Nov 28 11:06:40 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 28 11:06:40 crc restorecon[4678]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 11:06:40 crc restorecon[4678]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 11:06:40 crc restorecon[4678]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 11:06:40 crc restorecon[4678]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 11:06:40 crc restorecon[4678]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:40 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:40 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 
11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:06:41 crc 
restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 
11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:06:41 crc restorecon[4678]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:06:41 crc restorecon[4678]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 28 11:06:41 crc kubenswrapper[4772]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 11:06:41 crc kubenswrapper[4772]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 28 11:06:41 crc kubenswrapper[4772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 11:06:41 crc kubenswrapper[4772]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 28 11:06:41 crc kubenswrapper[4772]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 28 11:06:41 crc kubenswrapper[4772]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.813325 4772 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819349 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819404 4772 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819410 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819415 4772 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819420 4772 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819426 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819431 4772 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819437 4772 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819441 4772 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819445 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819449 4772 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819453 4772 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819458 4772 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819462 4772 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819466 4772 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819477 4772 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819484 4772 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819490 4772 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819495 4772 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819500 4772 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819505 4772 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819509 4772 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819514 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819519 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819523 4772 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819527 4772 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819531 4772 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819534 4772 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819538 4772 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819542 4772 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819546 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819550 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819554 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819558 4772 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819562 4772 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819566 4772 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819570 4772 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819573 4772 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819577 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819581 4772 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819585 4772 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819589 4772 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819593 4772 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819596 4772 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819600 4772 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819604 4772 feature_gate.go:330] unrecognized feature gate: 
SigstoreImageVerification Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819609 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819613 4772 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819618 4772 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819622 4772 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819626 4772 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819631 4772 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819635 4772 feature_gate.go:330] unrecognized feature gate: Example Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819639 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819643 4772 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819648 4772 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819652 4772 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819656 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819659 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819664 4772 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819668 4772 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819672 4772 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819675 4772 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819678 4772 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819682 4772 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819686 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819689 4772 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819693 4772 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819696 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819700 4772 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.819703 4772 
feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820066 4772 flags.go:64] FLAG: --address="0.0.0.0" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820083 4772 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820099 4772 flags.go:64] FLAG: --anonymous-auth="true" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820106 4772 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820113 4772 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820119 4772 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820125 4772 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820132 4772 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820136 4772 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820142 4772 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820147 4772 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820153 4772 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820158 4772 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820163 4772 flags.go:64] FLAG: --cgroup-root="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820167 4772 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820172 4772 flags.go:64] FLAG: --client-ca-file="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820176 4772 flags.go:64] FLAG: --cloud-config="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820180 4772 flags.go:64] FLAG: --cloud-provider="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820185 4772 flags.go:64] FLAG: --cluster-dns="[]" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820190 4772 flags.go:64] FLAG: --cluster-domain="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820195 4772 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820199 4772 flags.go:64] FLAG: --config-dir="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820204 4772 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820209 4772 flags.go:64] FLAG: --container-log-max-files="5" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820216 4772 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820221 4772 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820226 4772 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820231 4772 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820235 4772 flags.go:64] FLAG: 
--contention-profiling="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820240 4772 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820244 4772 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820249 4772 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820254 4772 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820260 4772 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820264 4772 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820269 4772 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820273 4772 flags.go:64] FLAG: --enable-load-reader="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820277 4772 flags.go:64] FLAG: --enable-server="true" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820283 4772 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820292 4772 flags.go:64] FLAG: --event-burst="100" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820297 4772 flags.go:64] FLAG: --event-qps="50" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820302 4772 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820307 4772 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820311 4772 flags.go:64] FLAG: --eviction-hard="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820318 4772 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820323 4772 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820327 4772 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820333 4772 flags.go:64] FLAG: --eviction-soft="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820337 4772 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820341 4772 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820346 4772 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820350 4772 flags.go:64] FLAG: --experimental-mounter-path="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820375 4772 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820380 4772 flags.go:64] FLAG: --fail-swap-on="true" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820384 4772 flags.go:64] FLAG: --feature-gates="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820390 4772 flags.go:64] FLAG: --file-check-frequency="20s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820395 4772 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820399 4772 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820404 4772 flags.go:64] FLAG: 
--healthz-bind-address="127.0.0.1" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820410 4772 flags.go:64] FLAG: --healthz-port="10248" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820415 4772 flags.go:64] FLAG: --help="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820419 4772 flags.go:64] FLAG: --hostname-override="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820424 4772 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820429 4772 flags.go:64] FLAG: --http-check-frequency="20s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820434 4772 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820439 4772 flags.go:64] FLAG: --image-credential-provider-config="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820443 4772 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820448 4772 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820452 4772 flags.go:64] FLAG: --image-service-endpoint="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820457 4772 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820461 4772 flags.go:64] FLAG: --kube-api-burst="100" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820474 4772 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820478 4772 flags.go:64] FLAG: --kube-api-qps="50" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820485 4772 flags.go:64] FLAG: --kube-reserved="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820490 4772 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820494 4772 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820498 4772 flags.go:64] FLAG: --kubelet-cgroups="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820502 4772 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820507 4772 flags.go:64] FLAG: --lock-file="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820512 4772 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820516 4772 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820521 4772 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820533 4772 flags.go:64] FLAG: --log-json-split-stream="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820538 4772 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820543 4772 flags.go:64] FLAG: --log-text-split-stream="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820547 4772 flags.go:64] FLAG: --logging-format="text" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820552 4772 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820556 4772 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820561 4772 flags.go:64] FLAG: 
--manifest-url="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820565 4772 flags.go:64] FLAG: --manifest-url-header="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820571 4772 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820576 4772 flags.go:64] FLAG: --max-open-files="1000000" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820583 4772 flags.go:64] FLAG: --max-pods="110" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820587 4772 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820591 4772 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820595 4772 flags.go:64] FLAG: --memory-manager-policy="None" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820600 4772 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820604 4772 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820609 4772 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820614 4772 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820626 4772 flags.go:64] FLAG: --node-status-max-images="50" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820630 4772 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820635 4772 flags.go:64] FLAG: --oom-score-adj="-999" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820641 4772 flags.go:64] FLAG: --pod-cidr="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820645 4772 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820654 4772 flags.go:64] FLAG: --pod-manifest-path="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820658 4772 flags.go:64] FLAG: --pod-max-pids="-1" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820662 4772 flags.go:64] FLAG: --pods-per-core="0" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820667 4772 flags.go:64] FLAG: --port="10250" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820671 4772 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820675 4772 flags.go:64] FLAG: --provider-id="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820679 4772 flags.go:64] FLAG: --qos-reserved="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820684 4772 flags.go:64] FLAG: --read-only-port="10255" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820688 4772 flags.go:64] FLAG: --register-node="true" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820693 4772 flags.go:64] FLAG: --register-schedulable="true" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820697 4772 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820706 4772 flags.go:64] FLAG: --registry-burst="10" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820711 4772 flags.go:64] FLAG: 
--registry-qps="5" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820715 4772 flags.go:64] FLAG: --reserved-cpus="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820720 4772 flags.go:64] FLAG: --reserved-memory="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820726 4772 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820730 4772 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820735 4772 flags.go:64] FLAG: --rotate-certificates="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820739 4772 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820743 4772 flags.go:64] FLAG: --runonce="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820747 4772 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820752 4772 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820756 4772 flags.go:64] FLAG: --seccomp-default="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820760 4772 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820764 4772 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820769 4772 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820774 4772 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820779 4772 flags.go:64] FLAG: --storage-driver-password="root" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820783 4772 flags.go:64] FLAG: --storage-driver-secure="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820788 4772 flags.go:64] FLAG: --storage-driver-table="stats" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820792 4772 flags.go:64] FLAG: --storage-driver-user="root" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820798 4772 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820803 4772 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820808 4772 flags.go:64] FLAG: --system-cgroups="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820812 4772 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820820 4772 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820824 4772 flags.go:64] FLAG: --tls-cert-file="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820828 4772 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820835 4772 flags.go:64] FLAG: --tls-min-version="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820839 4772 flags.go:64] FLAG: --tls-private-key-file="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820843 4772 flags.go:64] FLAG: --topology-manager-policy="none" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820848 4772 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820852 4772 flags.go:64] FLAG: 
--topology-manager-scope="container" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820856 4772 flags.go:64] FLAG: --v="2" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820863 4772 flags.go:64] FLAG: --version="false" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820870 4772 flags.go:64] FLAG: --vmodule="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820877 4772 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.820881 4772 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821007 4772 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821012 4772 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821024 4772 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821029 4772 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821033 4772 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821037 4772 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821042 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821046 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821050 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821055 4772 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
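
The flags.go:64 block dumps every flag the kubelet process actually parsed, one record per flag, with the effective value in quotes. Folding the block into a dictionary gives a greppable view of this node's configuration. A minimal sketch under the same assumption (journal contents in `text`); the function name is illustrative:

    import re

    # \s+ after "FLAG:" tolerates records that wrapped across lines.
    FLAG = re.compile(r'flags\.go:\d+\] FLAG:\s+(--[\w-]+)="([^"]*)"')

    def effective_flags(text):
        """Return {flag: value} from the kubelet FLAG dump."""
        return dict(FLAG.findall(text))

    # e.g. effective_flags(text)["--node-ip"] -> "192.168.126.11"
    #      effective_flags(text)["--config"]  -> "/etc/kubernetes/kubelet.conf"
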
Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821059 4772 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821063 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821067 4772 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821071 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821074 4772 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821079 4772 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821083 4772 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821087 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821091 4772 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821095 4772 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821098 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821102 4772 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821106 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821109 4772 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821113 4772 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821117 4772 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821121 4772 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821126 4772 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821130 4772 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821134 4772 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821138 4772 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821142 4772 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821145 4772 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821149 4772 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821153 4772 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821157 4772 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 11:06:41 crc 
kubenswrapper[4772]: W1128 11:06:41.821161 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821164 4772 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821168 4772 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821172 4772 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821176 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821180 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821184 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821188 4772 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821193 4772 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821198 4772 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821203 4772 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821210 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821214 4772 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821219 4772 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821223 4772 feature_gate.go:330] unrecognized feature gate: Example Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821228 4772 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821232 4772 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821237 4772 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821242 4772 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821248 4772 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821254 4772 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821260 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821265 4772 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821271 4772 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821276 4772 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821280 4772 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821285 4772 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821289 4772 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821294 4772 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821298 4772 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821303 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821307 4772 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821311 4772 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821315 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.821320 4772 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.821336 4772 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.833671 4772 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.833747 4772 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.833914 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.833931 4772 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.833941 4772 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.833952 4772 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.833962 4772 feature_gate.go:330] unrecognized feature gate: 
MixedCPUsAllocation Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.833971 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.833979 4772 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.833988 4772 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.833997 4772 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834007 4772 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834016 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834025 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834037 4772 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834045 4772 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834054 4772 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834063 4772 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834072 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834080 4772 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834089 4772 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834097 4772 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834106 4772 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834114 4772 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834122 4772 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834133 4772 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
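
All of the feature_gate.go:330 warnings are kubelet reporting gate names it does not register; the names (GatewayAPI, NewOLM, MachineConfigNodes, and so on) look like OpenShift-level gates rather than upstream Kubernetes ones, which would explain why the embedded kubelet does not recognize them. To get the distinct set rather than the noisy repeats, assuming `text` again:

    import re

    # \s+ tolerates gate names that wrapped onto the next line in the capture.
    UNRECOGNIZED = re.compile(r"unrecognized feature gate:\s+(\w+)")

    def unknown_gates(text):
        """Unique gate names kubelet warned about, sorted for stable output."""
        return sorted(set(UNRECOGNIZED.findall(text)))
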
Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834145 4772 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834156 4772 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834165 4772 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834174 4772 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834183 4772 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834192 4772 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834200 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834211 4772 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834221 4772 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834231 4772 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834244 4772 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834254 4772 feature_gate.go:330] unrecognized feature gate: Example Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834264 4772 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834273 4772 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834285 4772 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834295 4772 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834305 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834314 4772 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834323 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834331 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834340 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834349 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834384 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834393 4772 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834404 4772 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834414 4772 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834425 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834433 4772 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834442 4772 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834452 4772 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834461 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834472 4772 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834481 4772 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834491 4772 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834501 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834510 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834519 4772 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834528 4772 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834538 4772 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834547 4772 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834555 4772 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834564 4772 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834572 4772 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834581 4772 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834589 4772 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834597 4772 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834608 4772 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.834623 4772 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 
11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834918 4772 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834932 4772 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834942 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834951 4772 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834961 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834969 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834978 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834987 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.834996 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835005 4772 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835013 4772 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835024 4772 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835036 4772 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835046 4772 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835056 4772 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835064 4772 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835073 4772 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835084 4772 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
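
The same warning list is re-emitted several times in this boot (at .819…, .821…, .834…, and .835…), which is consistent with the gate set being evaluated more than once during startup rather than with distinct problems. Counting repeats per gate makes that obvious; a sketch, same `text` assumption:

    from collections import Counter
    import re

    GATE = re.compile(r"unrecognized feature gate:\s+(\w+)")

    def gate_repeats(text):
        """Count how many times each unrecognized gate is re-reported."""
        return Counter(GATE.findall(text))

    # In this boot each gate should count about 4 times, once per pass,
    # e.g. gate_repeats(text)["GatewayAPI"] == 4.
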
Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835094 4772 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835104 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835112 4772 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835121 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835129 4772 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835140 4772 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835150 4772 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835158 4772 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835166 4772 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835175 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835183 4772 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835191 4772 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835200 4772 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835209 4772 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835218 4772 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835226 4772 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835235 4772 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835244 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835255 4772 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835264 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835273 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835281 4772 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835290 4772 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835299 4772 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835308 4772 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835317 4772 feature_gate.go:330] 
unrecognized feature gate: VSphereDriverConfiguration Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835326 4772 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835335 4772 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835343 4772 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835351 4772 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835388 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835397 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835408 4772 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835419 4772 feature_gate.go:330] unrecognized feature gate: Example Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835428 4772 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835437 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835445 4772 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835455 4772 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835466 4772 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835475 4772 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835487 4772 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835496 4772 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835505 4772 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835514 4772 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835523 4772 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835531 4772 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835539 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835547 4772 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835556 4772 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835564 4772 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835573 4772 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835581 4772 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.835591 4772 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.835605 4772 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.835954 4772 server.go:940] "Client rotation is on, will bootstrap in background" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.841223 4772 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.841391 4772 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
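
After each pass the kubelet logs the gate set it actually applied, as a Go map literal; note that only upstream gates (KMSv1, ValidatingAdmissionPolicy, CloudDualStackNodeIPs, and the rest) survive into it. The literal is easy to turn back into structured data; a minimal sketch, `text` as before:

    import re

    MAP = re.compile(r"feature gates: \{map\[([^\]]*)\]\}")

    def applied_gates(text):
        """Parse the last 'feature gates: {map[...]}' summary into {name: bool}."""
        body = MAP.findall(text)[-1]   # the summary repeats; take the last one
        return {k: v == "true"
                for k, v in (kv.split(":", 1) for kv in body.split())}

    # applied_gates(text)["KMSv1"]    -> True
    # applied_gates(text)["NodeSwap"] -> False
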
Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.842328 4772 server.go:997] "Starting client certificate rotation" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.842410 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.842683 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-27 08:40:31.636369277 +0000 UTC Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.842875 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.851900 4772 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 11:06:41 crc kubenswrapper[4772]: E1128 11:06:41.854758 4772 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.855511 4772 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.867306 4772 log.go:25] "Validated CRI v1 runtime API" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.888476 4772 log.go:25] "Validated CRI v1 image API" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.891408 4772 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.894693 4772 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-28-11-02-17-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.894730 4772 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.909109 4772 manager.go:217] Machine: {Timestamp:2025-11-28 11:06:41.907605732 +0000 UTC m=+0.230848979 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55 BootID:a05b4f4f-c83a-40e9-9c28-0f224668a04f Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 
Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5c:fd:98 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5c:fd:98 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ac:96:bd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:29:f7:c3 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c1:fc:1f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:96:ba:47 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0a:0c:66:63:37:42 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6a:0c:44:0b:1d:c8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] 
Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.909337 4772 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.909528 4772 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.910348 4772 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.910526 4772 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.910570 4772 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.910811 4772 topology_manager.go:138] "Creating topology manager with none policy" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.910823 4772 
container_manager_linux.go:303] "Creating device plugin manager" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.911026 4772 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.911067 4772 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.911244 4772 state_mem.go:36] "Initialized new in-memory state store" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.911334 4772 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.912133 4772 kubelet.go:418] "Attempting to sync node with API server" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.912156 4772 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.912199 4772 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.912216 4772 kubelet.go:324] "Adding apiserver pod source" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.912407 4772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.914301 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 11:06:41 crc kubenswrapper[4772]: E1128 11:06:41.914404 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.914723 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 11:06:41 crc kubenswrapper[4772]: E1128 11:06:41.914773 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.915566 4772 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.915970 4772 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
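The Node Config entry above (container_manager_linux.go:272) reserves 200m CPU, 350Mi memory, and 350Mi ephemeral storage for the system, sets a 100Mi memory.available hard-eviction threshold, and leaves KubeReserved null. Node Allocatable is capacity minus these reservations minus the hard-eviction value, so a quick arithmetic check against the 33654128640-byte MemoryCapacity reported by cAdvisor (a sketch, not kubelet code):

package main

import "fmt"

func main() {
	const Mi = int64(1) << 20

	capacity := int64(33654128640)    // MemoryCapacity from the cAdvisor machine info
	systemReserved := int64(350) * Mi // "SystemReserved":{"memory":"350Mi"}
	evictionHard := int64(100) * Mi   // memory.available hard-eviction threshold
	// KubeReserved is null in the Node Config above, so it contributes nothing.

	allocatable := capacity - systemReserved - evictionHard
	fmt.Printf("allocatable memory: %d bytes (%.2f Gi)\n",
		allocatable, float64(allocatable)/float64(int64(1)<<30))
}

This prints 33182269440 bytes (about 30.90 Gi), which is the figure the node should end up advertising as allocatable memory.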
Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.916810 4772 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.917640 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.917665 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.917673 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.917682 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.917694 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.917702 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.917710 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.917722 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.917735 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.917743 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.917772 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.917780 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.918575 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.920392 4772 server.go:1280] "Started kubelet" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.921309 4772 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.921278 4772 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.921882 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.922075 4772 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 28 11:06:41 crc systemd[1]: Started Kubernetes Kubelet. 
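Both rotation deadlines in this log follow client-go's certificate-manager rule: the deadline is drawn uniformly at random from 70-90% of the certificate's validity window, so a fleet of kubelets does not renew in lockstep. The kube-apiserver-client-kubelet deadline above (2025-11-27 08:40:31) is already in the past at boot, which is why the kubelet immediately logs "Rotating certificates" and then fails the CSR POST with connection refused: api-int.crc.testing:6443 is served by a static pod this same kubelet has not started yet. The kubelet-serving deadline just below (2025-12-25, a 651h wait) is still ahead. A sketch of the deadline rule; the certificates' issue times are not logged, so the one-year validity window here is an assumption:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// Sketch of client-go's scheduling rule: rotation is attempted at a uniformly
// random point 70-90% of the way through the certificate's validity window.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	validity := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(validity) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// The serving cert below expires 2026-02-24 05:53:03 UTC; its issue time is
	// not logged, so a one-year window is assumed here for illustration.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)

	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	fmt.Println("wait:", time.Until(deadline).Round(time.Second))
	// Under that assumption the logged serving deadline (2025-12-25 14:44:57,
	// ~83% of the window) and client deadline (~76%) both fall in the range.
}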
Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.924457 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.924506 4772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.927970 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:44:57.833791757 +0000 UTC Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.928032 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 651h38m15.905762844s for next certificate rotation Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.928085 4772 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.928132 4772 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.928294 4772 server.go:460] "Adding debug handlers to kubelet server" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.928322 4772 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.928773 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 11:06:41 crc kubenswrapper[4772]: E1128 11:06:41.928869 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="200ms" Nov 28 11:06:41 crc kubenswrapper[4772]: E1128 11:06:41.928884 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 11:06:41 crc kubenswrapper[4772]: E1128 11:06:41.929063 4772 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.932737 4772 factory.go:153] Registering CRI-O factory Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.932786 4772 factory.go:221] Registration of the crio container factory successfully Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.932876 4772 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.932891 4772 factory.go:55] Registering systemd factory Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.932899 4772 factory.go:221] Registration of the systemd container factory successfully Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.932932 4772 factory.go:103] Registering Raw factory Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.932952 4772 manager.go:1196] Started watching for new ooms in manager Nov 28 11:06:41 crc 
kubenswrapper[4772]: I1128 11:06:41.933761 4772 manager.go:319] Starting recovery of all containers Nov 28 11:06:41 crc kubenswrapper[4772]: E1128 11:06:41.933616 4772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c26f84dbb8868 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 11:06:41.920272488 +0000 UTC m=+0.243515715,LastTimestamp:2025-11-28 11:06:41.920272488 +0000 UTC m=+0.243515715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.942028 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.943824 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.943922 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.943984 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944007 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944052 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944086 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944126 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 28 11:06:41 crc 
kubenswrapper[4772]: I1128 11:06:41.944163 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944206 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944239 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944285 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944309 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944380 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944441 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944469 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944491 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944547 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944604 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944638 4772 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944686 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944709 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944737 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944782 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944813 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944853 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944887 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944932 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.944973 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945017 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945052 4772 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945098 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945123 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945168 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945193 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945222 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945270 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945290 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945334 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945381 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945406 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945434 4772 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945476 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945499 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945518 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945559 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945580 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945621 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945647 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945663 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945679 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945724 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945754 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945794 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945822 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945897 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945923 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945970 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.945989 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946009 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946052 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946069 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946088 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946125 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946141 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946163 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946200 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946217 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946236 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946251 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946294 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946312 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946327 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946405 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946447 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946473 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946491 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946531 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946556 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946572 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946620 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946638 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946657 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946703 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946720 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946764 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946782 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946800 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946826 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946864 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946891 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946917 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946938 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.946967 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947005 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947029 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947055 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947074 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947109 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947126 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947148 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947172 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947190 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947235 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947286 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947320 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947346 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947385 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947410 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947434 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947453 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947482 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947505 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947527 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947552 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947569 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947611 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947637 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947655 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947680 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947697 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947726 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947749 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947766 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947789 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947807 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947824 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947846 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947866 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947887 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947908 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947931 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947953 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947972 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.947993 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948010 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948027 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948050 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948067 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948086 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948109 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948131 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948156 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948172 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948191 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948212 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948233 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948256 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948273 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948348 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948395 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948416 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948440 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948457 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948474 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948497 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948668 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948689 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948705 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948722 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948742 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948761 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948783 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948803 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948820 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948843 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948862 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948883 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948905 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.948924 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949174 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949287 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949332 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949391 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949433 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949457 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949479 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949516 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949538 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949568 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949590 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949613 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949644 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949670 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949700 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949722 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949747 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949807 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949833 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949874 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.949908 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951497 4772 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951615 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951704 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951736 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951770 4772 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951784 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951806 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951824 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951835 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951849 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951863 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951876 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951892 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951904 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951929 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951941 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951954 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951970 4772 reconstruct.go:97] "Volume reconstruction finished" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.951982 4772 reconciler.go:26] "Reconciler: start to sync state" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.957024 4772 manager.go:324] Recovery completed Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.974306 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.976986 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.977047 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.977059 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.978294 4772 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.978383 4772 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.978486 4772 state_mem.go:36] "Initialized new in-memory state store" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.990878 4772 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 28 11:06:41 crc kubenswrapper[4772]: E1128 11:06:41.992033 4772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c26f84dbb8868 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 11:06:41.920272488 +0000 UTC m=+0.243515715,LastTimestamp:2025-11-28 11:06:41.920272488 +0000 UTC m=+0.243515715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.993033 4772 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.993077 4772 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.993112 4772 kubelet.go:2335] "Starting kubelet main sync loop" Nov 28 11:06:41 crc kubenswrapper[4772]: E1128 11:06:41.993163 4772 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.993327 4772 policy_none.go:49] "None policy: Start" Nov 28 11:06:41 crc kubenswrapper[4772]: W1128 11:06:41.993624 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 11:06:41 crc kubenswrapper[4772]: E1128 11:06:41.993684 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.994687 4772 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 28 11:06:41 crc kubenswrapper[4772]: I1128 11:06:41.994726 4772 state_mem.go:35] "Initializing new in-memory state store" Nov 28 11:06:42 crc kubenswrapper[4772]: E1128 11:06:42.029833 4772 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.055572 4772 manager.go:334] "Starting Device Plugin manager" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.055642 4772 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.055660 4772 server.go:79] "Starting device plugin registration server" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.056327 4772 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.056352 4772 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.056784 4772 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.056882 4772 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.056893 4772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 28 11:06:42 crc kubenswrapper[4772]: E1128 11:06:42.068857 4772 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.093972 4772 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 11:06:42 crc kubenswrapper[4772]: 
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.094137 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.096052 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.096117 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.096128 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.096335 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.096524 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.096586 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.097482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.097523 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.097536 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.097648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.097734 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.097751 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.098072 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.098133 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.098560 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.099065 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.099101 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.099116 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.099217 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.099242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.099255 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.099390 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.099487 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.099522 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.100172 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.100206 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.100219 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.100399 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.100410 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.100430 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.100445 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.100545 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.100578 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.101129 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.101131 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.101165 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.101202 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.101222 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.101225 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.101488 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.101522 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.102420 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.102449 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.102460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:06:42 crc kubenswrapper[4772]: E1128 11:06:42.129945 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="400ms"
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.153949 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.154023 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.154043 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.154066 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.154206 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.154283 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.154310 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.154343 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.154423 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 
11:06:42.154457 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.154480 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.154499 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.157591 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.159224 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.159276 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.159289 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.159324 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 11:06:42 crc kubenswrapper[4772]: E1128 11:06:42.159976 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255143 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255200 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255220 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255238 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" 
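
"Attempting to register node" corresponds, at its core, to a POST of a v1.Node object; the kubelet keeps retrying until the API server accepts it. A reduced client-go sketch under stated assumptions (the kubeconfig path is illustrative only, and the real kubelet populates many more Node fields before posting):

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; not taken from the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        node := &v1.Node{ObjectMeta: metav1.ObjectMeta{Name: "crc"}}
        // While the apiserver refuses connections (as in the log), this POST
        // fails with the same dial error and must be retried.
        if _, err := client.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{}); err != nil {
            fmt.Println("register failed:", err)
        }
    }
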
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255257 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255285 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255316 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255339 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255388 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255321 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255413 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255433 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255444 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255466 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255489 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255385 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255499 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255526 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255531 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255556 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255571 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255455 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255618 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255632 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255784 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.256001 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.256015 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.256037 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.256061 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.255651 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.403907 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.405871 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.410427 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.410504 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.410960 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 11:06:42 crc kubenswrapper[4772]: E1128 11:06:42.411618 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 
28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.419873 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.433799 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: W1128 11:06:42.444957 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-5edb08d4886b973ca76a286391dfa40cb756ed51156f7a9b158fc8df6c133809 WatchSource:0}: Error finding container 5edb08d4886b973ca76a286391dfa40cb756ed51156f7a9b158fc8df6c133809: Status 404 returned error can't find the container with id 5edb08d4886b973ca76a286391dfa40cb756ed51156f7a9b158fc8df6c133809 Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.452873 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: W1128 11:06:42.454181 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-0e619e1abe74427957070cbcf0ae3659ab8bee9524db913c1cd3ecc956eceeb6 WatchSource:0}: Error finding container 0e619e1abe74427957070cbcf0ae3659ab8bee9524db913c1cd3ecc956eceeb6: Status 404 returned error can't find the container with id 0e619e1abe74427957070cbcf0ae3659ab8bee9524db913c1cd3ecc956eceeb6 Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.460967 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: W1128 11:06:42.466039 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-1c1d7b08d4f318479e6561807051f52de8750dc38542e9fa6ea86385f4fc883d WatchSource:0}: Error finding container 1c1d7b08d4f318479e6561807051f52de8750dc38542e9fa6ea86385f4fc883d: Status 404 returned error can't find the container with id 1c1d7b08d4f318479e6561807051f52de8750dc38542e9fa6ea86385f4fc883d Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.466156 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:06:42 crc kubenswrapper[4772]: W1128 11:06:42.482146 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a3f9ce86e0a85091df70cc8e35aa699ad4efdf40a17bb2a6eacfdc675c0587c7 WatchSource:0}: Error finding container a3f9ce86e0a85091df70cc8e35aa699ad4efdf40a17bb2a6eacfdc675c0587c7: Status 404 returned error can't find the container with id a3f9ce86e0a85091df70cc8e35aa699ad4efdf40a17bb2a6eacfdc675c0587c7 Nov 28 11:06:42 crc kubenswrapper[4772]: W1128 11:06:42.491426 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-50f297d221405af404e269b6073dc950ebfebdd9ccc76ebf036b4bb2b8057585 WatchSource:0}: Error finding container 50f297d221405af404e269b6073dc950ebfebdd9ccc76ebf036b4bb2b8057585: Status 404 returned error can't find the container with id 50f297d221405af404e269b6073dc950ebfebdd9ccc76ebf036b4bb2b8057585 Nov 28 11:06:42 crc kubenswrapper[4772]: E1128 11:06:42.531170 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="800ms" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.812063 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.814915 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.814957 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.814966 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.814991 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 11:06:42 crc kubenswrapper[4772]: E1128 11:06:42.815534 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 28 11:06:42 crc kubenswrapper[4772]: W1128 11:06:42.818286 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 11:06:42 crc kubenswrapper[4772]: E1128 11:06:42.818351 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 11:06:42 crc kubenswrapper[4772]: W1128 11:06:42.822403 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 11:06:42 crc kubenswrapper[4772]: E1128 11:06:42.822510 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 11:06:42 crc kubenswrapper[4772]: W1128 11:06:42.843500 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 11:06:42 crc kubenswrapper[4772]: E1128 11:06:42.843620 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.923153 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 11:06:42 crc kubenswrapper[4772]: I1128 11:06:42.999757 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5" exitCode=0 Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:42.999841 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5"} Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:42.999967 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"50f297d221405af404e269b6073dc950ebfebdd9ccc76ebf036b4bb2b8057585"} Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.000110 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.000906 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.000936 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.000946 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.001677 4772 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3" exitCode=0 Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.001736 4772 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3"} Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.001766 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a3f9ce86e0a85091df70cc8e35aa699ad4efdf40a17bb2a6eacfdc675c0587c7"} Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.001843 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.002766 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.002793 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.002803 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.003435 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.004252 4772 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="60fb93a77ff62a118766337daec711f282fe0e4176f7d764d1bb6a75a89d794b" exitCode=0 Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.004296 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.004309 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"60fb93a77ff62a118766337daec711f282fe0e4176f7d764d1bb6a75a89d794b"} Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.004327 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.004329 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1c1d7b08d4f318479e6561807051f52de8750dc38542e9fa6ea86385f4fc883d"} Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.004338 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.006900 4772 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759" exitCode=0 Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.006951 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759"} Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.007001 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0e619e1abe74427957070cbcf0ae3659ab8bee9524db913c1cd3ecc956eceeb6"} Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.007105 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.010327 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa"} Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.010410 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5edb08d4886b973ca76a286391dfa40cb756ed51156f7a9b158fc8df6c133809"} Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.011923 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.012035 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.012054 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:43 crc kubenswrapper[4772]: W1128 11:06:43.243186 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 11:06:43 crc kubenswrapper[4772]: E1128 11:06:43.243263 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 11:06:43 crc kubenswrapper[4772]: E1128 11:06:43.332496 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="1.6s" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.616656 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.619294 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.619351 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.619382 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:43 crc kubenswrapper[4772]: I1128 11:06:43.619419 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.007379 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 28 11:06:44 crc 
kubenswrapper[4772]: I1128 11:06:44.013647 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aebc818d09cc998a9cde1f0342e0cb5cf4da00f41a7a6710631f27ada2d58bd5"} Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.013685 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c20035e17d1144a41abbfa7f960dd7a68e1e4ce70ef574dcfedaebb00ce96d90"} Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.013696 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"07c22fe366a9ad1de0b215b9a9583ae3cb0a683107919c53de47fa2d49acc799"} Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.013809 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.020611 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.020649 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.020660 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.023092 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5"} Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.023131 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461"} Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.023142 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933"} Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.023245 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.024297 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.024423 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.024510 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.026164 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c"} Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.026189 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589"} Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.026200 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a"} Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.026210 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b"} Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.028611 4772 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230" exitCode=0 Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.028709 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.029068 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230"} Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.029301 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.029317 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.029325 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.029410 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.030880 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.030930 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:44 crc kubenswrapper[4772]: I1128 11:06:44.030948 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.035553 4772 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605" exitCode=0 Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.035708 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605"} Nov 28 
11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.036471 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.038183 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.038228 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.038242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.038410 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"92452e84ded9c12f06914848a4a7c13a94d051449442fa081b235ecd39983d38"} Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.038557 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.039994 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.040019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.040031 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.043509 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.043826 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404"} Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.043935 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.044677 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.044746 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.044775 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.044950 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.044988 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:45 crc kubenswrapper[4772]: I1128 11:06:45.044998 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:46 crc kubenswrapper[4772]: I1128 11:06:46.049842 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea"} Nov 28 11:06:46 crc kubenswrapper[4772]: I1128 11:06:46.049898 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2"} Nov 28 11:06:46 crc kubenswrapper[4772]: I1128 11:06:46.049914 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7"} Nov 28 11:06:46 crc kubenswrapper[4772]: I1128 11:06:46.049926 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7"} Nov 28 11:06:46 crc kubenswrapper[4772]: I1128 11:06:46.049943 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:46 crc kubenswrapper[4772]: I1128 11:06:46.050044 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:06:46 crc kubenswrapper[4772]: I1128 11:06:46.050940 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:46 crc kubenswrapper[4772]: I1128 11:06:46.050971 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:46 crc kubenswrapper[4772]: I1128 11:06:46.050995 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:46 crc kubenswrapper[4772]: I1128 11:06:46.471627 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.038013 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.038448 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.040347 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.040435 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.040456 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.059353 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650"} Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.059422 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.059476 4772 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.060900 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.060949 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.060959 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.060977 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.061014 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.061031 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.325637 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.325822 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.327262 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.327324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.327340 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:47 crc kubenswrapper[4772]: I1128 11:06:47.400694 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:06:48 crc kubenswrapper[4772]: I1128 11:06:48.062431 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:48 crc kubenswrapper[4772]: I1128 11:06:48.062450 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:48 crc kubenswrapper[4772]: I1128 11:06:48.064020 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:48 crc kubenswrapper[4772]: I1128 11:06:48.064074 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:48 crc kubenswrapper[4772]: I1128 11:06:48.064096 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:48 crc kubenswrapper[4772]: I1128 11:06:48.064094 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:48 crc kubenswrapper[4772]: I1128 11:06:48.064135 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:48 crc kubenswrapper[4772]: I1128 11:06:48.064153 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:48 crc kubenswrapper[4772]: 
I1128 11:06:48.111248 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 28 11:06:48 crc kubenswrapper[4772]: I1128 11:06:48.279452 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:06:48 crc kubenswrapper[4772]: I1128 11:06:48.279761 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:48 crc kubenswrapper[4772]: I1128 11:06:48.281853 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:48 crc kubenswrapper[4772]: I1128 11:06:48.281913 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:48 crc kubenswrapper[4772]: I1128 11:06:48.281932 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:49 crc kubenswrapper[4772]: I1128 11:06:49.065673 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:49 crc kubenswrapper[4772]: I1128 11:06:49.065865 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:49 crc kubenswrapper[4772]: I1128 11:06:49.067095 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:49 crc kubenswrapper[4772]: I1128 11:06:49.067124 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:49 crc kubenswrapper[4772]: I1128 11:06:49.067133 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:49 crc kubenswrapper[4772]: I1128 11:06:49.067691 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:49 crc kubenswrapper[4772]: I1128 11:06:49.067737 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:49 crc kubenswrapper[4772]: I1128 11:06:49.067754 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:49 crc kubenswrapper[4772]: I1128 11:06:49.152688 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 28 11:06:50 crc kubenswrapper[4772]: I1128 11:06:50.038923 4772 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 11:06:50 crc kubenswrapper[4772]: I1128 11:06:50.039036 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 11:06:50 crc kubenswrapper[4772]: I1128 11:06:50.069736 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:50 crc kubenswrapper[4772]: I1128 
11:06:50.071578 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:50 crc kubenswrapper[4772]: I1128 11:06:50.071689 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:50 crc kubenswrapper[4772]: I1128 11:06:50.071711 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:52 crc kubenswrapper[4772]: E1128 11:06:52.070055 4772 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 11:06:52 crc kubenswrapper[4772]: I1128 11:06:52.896487 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:06:52 crc kubenswrapper[4772]: I1128 11:06:52.896753 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:52 crc kubenswrapper[4772]: I1128 11:06:52.898647 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:52 crc kubenswrapper[4772]: I1128 11:06:52.898720 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:52 crc kubenswrapper[4772]: I1128 11:06:52.898734 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:53 crc kubenswrapper[4772]: I1128 11:06:53.107530 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:06:53 crc kubenswrapper[4772]: I1128 11:06:53.107788 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:53 crc kubenswrapper[4772]: I1128 11:06:53.109232 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:53 crc kubenswrapper[4772]: I1128 11:06:53.109291 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:53 crc kubenswrapper[4772]: I1128 11:06:53.109310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:53 crc kubenswrapper[4772]: I1128 11:06:53.114339 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:06:53 crc kubenswrapper[4772]: E1128 11:06:53.620772 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Nov 28 11:06:53 crc kubenswrapper[4772]: I1128 11:06:53.924182 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 28 11:06:54 crc kubenswrapper[4772]: E1128 11:06:54.009303 4772 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS 
handshake timeout" logger="UnhandledError" Nov 28 11:06:54 crc kubenswrapper[4772]: I1128 11:06:54.081030 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:54 crc kubenswrapper[4772]: I1128 11:06:54.082262 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:54 crc kubenswrapper[4772]: I1128 11:06:54.082292 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:54 crc kubenswrapper[4772]: I1128 11:06:54.082328 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:54 crc kubenswrapper[4772]: I1128 11:06:54.089980 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:06:54 crc kubenswrapper[4772]: W1128 11:06:54.587738 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 28 11:06:54 crc kubenswrapper[4772]: I1128 11:06:54.587905 4772 trace.go:236] Trace[1059844486]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 11:06:44.585) (total time: 10002ms): Nov 28 11:06:54 crc kubenswrapper[4772]: Trace[1059844486]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (11:06:54.587) Nov 28 11:06:54 crc kubenswrapper[4772]: Trace[1059844486]: [10.002598696s] [10.002598696s] END Nov 28 11:06:54 crc kubenswrapper[4772]: E1128 11:06:54.587946 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 28 11:06:54 crc kubenswrapper[4772]: W1128 11:06:54.605575 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 28 11:06:54 crc kubenswrapper[4772]: I1128 11:06:54.605682 4772 trace.go:236] Trace[397904185]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 11:06:44.603) (total time: 10001ms): Nov 28 11:06:54 crc kubenswrapper[4772]: Trace[397904185]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:06:54.605) Nov 28 11:06:54 crc kubenswrapper[4772]: Trace[397904185]: [10.001931883s] [10.001931883s] END Nov 28 11:06:54 crc kubenswrapper[4772]: E1128 11:06:54.605710 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 28 11:06:54 crc kubenswrapper[4772]: E1128 11:06:54.934447 4772 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Nov 28 11:06:55 crc kubenswrapper[4772]: I1128 11:06:55.015999 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 11:06:55 crc kubenswrapper[4772]: I1128 11:06:55.016126 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 11:06:55 crc kubenswrapper[4772]: I1128 11:06:55.021875 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 11:06:55 crc kubenswrapper[4772]: I1128 11:06:55.021961 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 11:06:55 crc kubenswrapper[4772]: I1128 11:06:55.083252 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:55 crc kubenswrapper[4772]: I1128 11:06:55.084210 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:55 crc kubenswrapper[4772]: I1128 11:06:55.084235 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:55 crc kubenswrapper[4772]: I1128 11:06:55.084243 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:55 crc kubenswrapper[4772]: I1128 11:06:55.221529 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:06:55 crc kubenswrapper[4772]: I1128 11:06:55.225959 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:06:55 crc kubenswrapper[4772]: I1128 11:06:55.226005 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:06:55 crc kubenswrapper[4772]: I1128 11:06:55.226018 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:06:55 crc kubenswrapper[4772]: I1128 11:06:55.226049 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 11:06:56 crc kubenswrapper[4772]: I1128 11:06:56.478951 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:06:56 crc kubenswrapper[4772]: I1128 11:06:56.479148 4772 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 11:06:56 crc kubenswrapper[4772]: I1128 11:06:56.480606 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:06:56 crc kubenswrapper[4772]: I1128 11:06:56.480667 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:06:56 crc kubenswrapper[4772]: I1128 11:06:56.480681 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:06:56 crc kubenswrapper[4772]: I1128 11:06:56.484535 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 11:06:57 crc kubenswrapper[4772]: I1128 11:06:57.088992 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 11:06:57 crc kubenswrapper[4772]: I1128 11:06:57.090649 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:06:57 crc kubenswrapper[4772]: I1128 11:06:57.090705 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:06:57 crc kubenswrapper[4772]: I1128 11:06:57.090724 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.146963 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.147179 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.148732 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.148771 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.148782 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.167772 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.373751 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.397072 4772 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.772890 4772 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.921549 4772 apiserver.go:52] "Watching apiserver"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.926761 4772 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.927037 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"]
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.927587 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.927618 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.927844 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.928094 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.928157 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 28 11:06:58 crc kubenswrapper[4772]: E1128 11:06:58.928244 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.928340 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:06:58 crc kubenswrapper[4772]: E1128 11:06:58.928441 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 11:06:58 crc kubenswrapper[4772]: E1128 11:06:58.928532 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.931042 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.931794 4772 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.936012 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.937006 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.937031 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.936996 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.937888 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.938024 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.938190 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.939417 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Nov 28 11:06:58 crc kubenswrapper[4772]: I1128 11:06:58.980972 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 11:06:59 crc kubenswrapper[4772]: I1128 11:06:59.000218 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 11:06:59 crc kubenswrapper[4772]: I1128 11:06:59.015454 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 11:06:59 crc kubenswrapper[4772]: I1128 11:06:59.030606 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 11:06:59 crc kubenswrapper[4772]: I1128 11:06:59.047336 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 11:06:59 crc kubenswrapper[4772]: I1128 11:06:59.061347 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 11:06:59 crc kubenswrapper[4772]: I1128 11:06:59.072655 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 11:06:59 crc kubenswrapper[4772]: I1128 11:06:59.073804 4772 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Nov 28 11:06:59 crc kubenswrapper[4772]: I1128 11:06:59.115191 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.020992 4772 trace.go:236] Trace[2107676068]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 11:06:46.377) (total time: 13643ms):
Nov 28 11:07:00 crc kubenswrapper[4772]: Trace[2107676068]: ---"Objects listed" error: 13643ms (11:07:00.020)
Nov 28 11:07:00 crc kubenswrapper[4772]: Trace[2107676068]: [13.643192602s] [13.643192602s] END
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.021035 4772 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.025271 4772 trace.go:236] Trace[512724374]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 11:06:45.321) (total time: 14703ms):
Nov 28 11:07:00 crc kubenswrapper[4772]: Trace[512724374]: ---"Objects listed" error: 14703ms (11:07:00.025)
Nov 28 11:07:00 crc kubenswrapper[4772]: Trace[512724374]: [14.703419203s] [14.703419203s] END
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.025314 4772 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.029808 4772 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.039685 4772 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.039817 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130571 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130620 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130643 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130671 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130693 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130711 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130726 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130742 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130761 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130780 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130794 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130808 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130824 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130841 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130856 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130869 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130885 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130976 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.130997 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131014 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131036 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131056 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131074 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131089 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131111 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131125 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131141 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131157 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131175 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131190 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131211 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131229 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131249 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131264 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131287 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131303 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131319 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131335 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131370 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131390 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131415 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131429 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131445 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131462 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131493 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131509 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131525 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131540 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131558 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131573 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131591 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131609 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131624 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131640 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131654 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131675 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131690 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131712 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131732 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131748 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131764 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131782 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131802 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131818 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131867 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131890 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131913 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131934 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131966 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131982 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.131998 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132014 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132030 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132046 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132065 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132083 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132099 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132115 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132131 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132145 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132148 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132180 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132300 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132331 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132390 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132417 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132440 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132448 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132497 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132475 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132520 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132540 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132576 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132595 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132613 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132658 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132680 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132697 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132734 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132739 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132755 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132766 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132779 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132927 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132950 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.132974 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.133147 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.133148 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.133176 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.133232 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.133446 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.133475 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.133636 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.133713 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.133719 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.133734 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.133745 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.133866 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.133913 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.134044 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.134045 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.134053 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.134078 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.134133 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.134347 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.134391 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.134458 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.134662 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.134780 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). 
InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.134812 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.135197 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.135266 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.135305 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.135476 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.135685 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.135728 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138200 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138239 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138276 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138300 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138324 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138345 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138382 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138384 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138407 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138435 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138459 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138482 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138501 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138527 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138555 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138583 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138613 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138657 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138691 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138723 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138651 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138753 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138846 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138890 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138956 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138986 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139014 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " 
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139042 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139069 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139093 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139120 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139142 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139164 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139189 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139222 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139247 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139341 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139387 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139415 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139439 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139462 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139491 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139517 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139543 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139572 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139601 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139627 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139654 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139683 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139708 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139734 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139756 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139778 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139805 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139830 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139855 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139913 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" 
(UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139939 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139963 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140157 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140196 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140223 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140244 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140262 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140286 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140314 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140340 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140384 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140410 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140438 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140466 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140491 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140514 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140537 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140562 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140586 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140608 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140632 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140656 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140682 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140714 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140740 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140765 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140789 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140813 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140839 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140866 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140891 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140916 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140940 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140963 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140982 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141000 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141017 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141034 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141052 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141072 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141089 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141108 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141157 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141184 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141209 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141229 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141250 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141268 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141288 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141308 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141327 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141344 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141382 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141406 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141425 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141448 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141529 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" 
DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141542 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141554 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141565 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141575 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141585 4772 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141594 4772 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141608 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141621 4772 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141635 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141650 4772 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141662 4772 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141674 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141686 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 
28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141707 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141718 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141728 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141738 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141750 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141761 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141771 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141780 4772 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141791 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141804 4772 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141814 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141823 4772 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141834 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node 
\"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141846 4772 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141856 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141866 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141877 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141887 4772 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141899 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141909 4772 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141919 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141930 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141940 4772 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141950 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141960 4772 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.141969 4772 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node 
\"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.144146 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.138846 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.139068 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140235 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140676 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140811 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140903 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.140925 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.143937 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.145040 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.153129 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.145209 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.145501 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.145592 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.145719 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.145846 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.146085 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.146122 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.146131 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.149646 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.149918 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.150201 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.150329 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.150659 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.153641 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.150991 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.151785 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.151847 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.151662 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.152136 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.152180 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.152173 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.152272 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.152526 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.152632 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.152687 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.153149 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.154059 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.153217 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.153497 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.154073 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.152005 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.154145 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.154164 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.154450 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.154710 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.154733 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.154761 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.154893 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.155074 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:07:00.655044689 +0000 UTC m=+18.978287916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.155085 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.155179 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.155202 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.155282 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.155310 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.155677 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.156086 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.155673 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.155697 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.155752 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.155924 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.156018 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.156159 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:00.656137378 +0000 UTC m=+18.979380815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.156215 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.155970 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.156425 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:00.656391404 +0000 UTC m=+18.979634621 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.156813 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.157271 4772 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.158353 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.158433 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.158540 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.158900 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.159624 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.159703 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.160039 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.161426 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.161639 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.161745 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.161132 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.166580 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.166612 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.166670 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.167756 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.168035 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.168624 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.169348 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.170035 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.170046 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.170186 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.169990 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.170662 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.170716 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.171143 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.171466 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.171770 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.172439 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.172525 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.172635 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.172712 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.172915 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.174761 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.175069 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.178889 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.179869 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.179942 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.180295 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.180676 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.184812 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.185734 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.185990 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.186417 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.186749 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.186879 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.186928 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.186943 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.187111 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.186808 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.186809 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.187286 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:00.687261781 +0000 UTC m=+19.010505008 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.187487 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.187738 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.187775 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.187793 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.187868 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:00.687843266 +0000 UTC m=+19.011086503 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.189481 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.189720 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.189783 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.189855 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.190593 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.191947 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.192204 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.192508 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.192558 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.193021 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.193067 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.193690 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.193842 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.193975 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.194197 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.194515 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.194635 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.194661 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.194773 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.195397 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.196122 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.196556 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.196572 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.196634 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.197265 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.197473 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.197508 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.199235 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.199646 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.201041 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.201173 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.201305 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.201511 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.201589 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.201700 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.201780 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.202285 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54720->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.202299 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54726->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.202351 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54720->192.168.126.11:17697: read: connection reset by peer" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.202413 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54726->192.168.126.11:17697: read: connection reset by peer" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.202464 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.202534 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.202956 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.203146 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.203860 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.203961 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.204287 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.205483 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.205786 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.205819 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.207006 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.207092 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.209586 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.209651 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.211850 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.232734 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.245937 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.245992 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246115 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246132 4772 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246150 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246166 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246190 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246207 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246222 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246239 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246256 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246272 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 
28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246289 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246307 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246324 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246343 4772 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246385 4772 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246399 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246411 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246419 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246428 4772 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246436 4772 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246447 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246456 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246465 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" 
Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246474 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246483 4772 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246492 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246502 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246510 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246519 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246528 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246537 4772 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246547 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246559 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246570 4772 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246578 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246586 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" 
DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246595 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246603 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246611 4772 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246620 4772 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246628 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246636 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246645 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246654 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246663 4772 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246672 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246681 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246691 4772 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246700 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on 
node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246710 4772 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246721 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246729 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246738 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246746 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246757 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246766 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246783 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246791 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246801 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246809 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246818 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246826 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 
11:07:00.246835 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246866 4772 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246879 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246887 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246895 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246905 4772 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246914 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246923 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246932 4772 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246944 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246958 4772 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246970 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.246981 4772 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc 
kubenswrapper[4772]: I1128 11:07:00.246992 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247002 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247012 4772 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247020 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247077 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247147 4772 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247304 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247456 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247470 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247480 4772 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247488 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247498 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: 
I1128 11:07:00.247506 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247514 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247525 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247533 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247543 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247554 4772 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247566 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247697 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247743 4772 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247762 4772 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247778 4772 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247793 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247807 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247820 4772 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247832 4772 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247845 4772 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247856 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247869 4772 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247881 4772 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247892 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247903 4772 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247914 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247926 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247935 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247946 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247957 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247969 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247980 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.247992 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248004 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248015 4772 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248029 4772 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248040 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248053 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248064 4772 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248076 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248088 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248100 4772 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248112 4772 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248123 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248135 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248157 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248169 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248181 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248192 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248208 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248224 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248238 4772 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248250 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248262 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248272 4772 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248284 4772 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248296 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248314 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248325 4772 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248337 4772 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248352 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248475 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248504 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248516 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248527 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248539 4772 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248551 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248565 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248587 4772 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248597 4772 reconciler_common.go:293] "Volume detached for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.248608 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.249044 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.250269 4772 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.250375 4772 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.251705 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.251734 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.251744 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.251764 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.251776 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:00Z","lastTransitionTime":"2025-11-28T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.254891 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.266303 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.272159 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.272214 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.272225 4772 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.272244 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.272256 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:00Z","lastTransitionTime":"2025-11-28T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.285022 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.290604 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.290658 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.290673 4772 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.290695 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.290709 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:00Z","lastTransitionTime":"2025-11-28T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.306686 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.311080 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.311116 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.311128 4772 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.311146 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.311157 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:00Z","lastTransitionTime":"2025-11-28T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.321274 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.324598 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.324619 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.324627 4772 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.324643 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.324654 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:00Z","lastTransitionTime":"2025-11-28T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.332743 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.332878 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.334240 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.334269 4772 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.334279 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.334293 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.334304 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:00Z","lastTransitionTime":"2025-11-28T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.349098 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.349130 4772 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.436595 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.436644 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.436653 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.436673 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.436684 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:00Z","lastTransitionTime":"2025-11-28T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.450257 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.458376 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.464339 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:07:00 crc kubenswrapper[4772]: W1128 11:07:00.478525 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-27821597abeb6e31136a6130a22cfb442ab6ff68d201b32239900e22f573adbf WatchSource:0}: Error finding container 27821597abeb6e31136a6130a22cfb442ab6ff68d201b32239900e22f573adbf: Status 404 returned error can't find the container with id 27821597abeb6e31136a6130a22cfb442ab6ff68d201b32239900e22f573adbf Nov 28 11:07:00 crc kubenswrapper[4772]: W1128 11:07:00.482896 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-bb35cd5a4a2515a18d5d8fc133f86054b88b6e8610ec124207bf013a0a3077fd WatchSource:0}: Error finding container bb35cd5a4a2515a18d5d8fc133f86054b88b6e8610ec124207bf013a0a3077fd: Status 404 returned error can't find the container with id bb35cd5a4a2515a18d5d8fc133f86054b88b6e8610ec124207bf013a0a3077fd Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.550297 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.550704 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.550715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.550733 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.550744 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:00Z","lastTransitionTime":"2025-11-28T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.570825 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wgsks"] Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.571165 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-wgsks" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.577560 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.578082 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.578218 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.582555 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.582797 4772 csr.go:261] certificate signing request csr-trphs is approved, waiting to be issued Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.601883 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.639766 4772 csr.go:257] certificate signing request csr-trphs is issued Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.652887 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.659775 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.659814 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.659826 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.659845 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.659856 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:00Z","lastTransitionTime":"2025-11-28T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.674797 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.694916 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.714922 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",
\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o
://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.740717 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.753246 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.753400 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.753443 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:07:01.753414667 +0000 UTC m=+20.076657894 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.753504 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.753569 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.753510 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.753602 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6-host\") pod \"node-ca-wgsks\" (UID: \"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\") " pod="openshift-image-registry/node-ca-wgsks" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.753626 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.753649 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6-serviceca\") pod \"node-ca-wgsks\" (UID: \"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\") " pod="openshift-image-registry/node-ca-wgsks" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.753671 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzp4s\" (UniqueName: \"kubernetes.io/projected/3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6-kube-api-access-dzp4s\") pod \"node-ca-wgsks\" (UID: \"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\") " pod="openshift-image-registry/node-ca-wgsks" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.753724 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.753747 4772 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.753763 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.753776 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.753731 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:01.753721925 +0000 UTC m=+20.076965152 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.753837 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:01.753819278 +0000 UTC m=+20.077062505 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.753850 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:01.753844528 +0000 UTC m=+20.077087755 (durationBeforeRetry 1s). 
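
Two separate start-up races are visible here. The UnmountVolume.TearDown failure occurs because the kubevirt.io.hostpath-provisioner CSI driver has not re-registered with the freshly restarted kubelet, so no CSI client exists for the teardown; nestedpendingoperations parks the operation and retries with backoff (durationBeforeRetry 1s above). The "object ... not registered" errors for configmaps and secrets mean the kubelet's per-namespace watch caches are not populated yet; the later "Caches populated for *v1.ConfigMap" records show them arriving. Both conditions normally clear on their own. A quick confirmation sketch, under the same access assumptions as above:

  # API-side view of registered CSI drivers
  oc get csidrivers
  # Node-side: sockets in the kubelet's plugin-registration directory
  sudo ls /var/lib/kubelet/plugins_registry/
  # Watch the unmount retries clear once the driver re-registers
  sudo journalctl -u kubelet -f | grep hostpath-provisioner
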
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.753891 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.753908 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.753920 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.753970 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:01.753950841 +0000 UTC m=+20.077194058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.761733 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.761768 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.761777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.761795 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.761810 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:00Z","lastTransitionTime":"2025-11-28T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.762676 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.785822 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.854754 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6-host\") pod \"node-ca-wgsks\" (UID: \"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\") " pod="openshift-image-registry/node-ca-wgsks" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.854819 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6-serviceca\") pod \"node-ca-wgsks\" (UID: \"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\") " pod="openshift-image-registry/node-ca-wgsks" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.854849 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzp4s\" (UniqueName: \"kubernetes.io/projected/3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6-kube-api-access-dzp4s\") pod \"node-ca-wgsks\" (UID: 
\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\") " pod="openshift-image-registry/node-ca-wgsks" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.854909 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6-host\") pod \"node-ca-wgsks\" (UID: \"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\") " pod="openshift-image-registry/node-ca-wgsks" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.856067 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6-serviceca\") pod \"node-ca-wgsks\" (UID: \"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\") " pod="openshift-image-registry/node-ca-wgsks" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.864808 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.864860 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.864870 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.864898 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.864913 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:00Z","lastTransitionTime":"2025-11-28T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.876290 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzp4s\" (UniqueName: \"kubernetes.io/projected/3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6-kube-api-access-dzp4s\") pod \"node-ca-wgsks\" (UID: \"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\") " pod="openshift-image-registry/node-ca-wgsks" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.902710 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-wgsks" Nov 28 11:07:00 crc kubenswrapper[4772]: W1128 11:07:00.917635 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c8042d2_30a3_4b7a_8a7a_e6e4603b04d6.slice/crio-560c9071a120fb46315d039e7b9ab831dfb738d6768da567423e7cff901dcac5 WatchSource:0}: Error finding container 560c9071a120fb46315d039e7b9ab831dfb738d6768da567423e7cff901dcac5: Status 404 returned error can't find the container with id 560c9071a120fb46315d039e7b9ab831dfb738d6768da567423e7cff901dcac5 Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.968809 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.968858 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.968868 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.970980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.971012 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:00Z","lastTransitionTime":"2025-11-28T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.993830 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.993869 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:00 crc kubenswrapper[4772]: I1128 11:07:00.993932 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.993986 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.994139 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
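
The recurring "Node became not ready ... no CNI configuration file in /etc/kubernetes/cni/net.d/" condition ties this section together: until the network plugin writes its CNI config, the kubelet marks the node NotReady and skips syncing every pod that needs pod networking ("Error syncing pod, skipping" below), while host-network pods such as node-ca-wgsks proceed; its sandbox 560c9071... is created above, and its status patch later shows podIP equal to the host IP 192.168.126.11. The 404 on the watch event above is the usual race with a sandbox that was only just created. A sketch for watching the network come up, assuming node SSH access and that the cluster runs the default OVN-Kubernetes network type (that namespace is an assumption, not taken from this log):

  # CNI config appears here once the network plugin is up
  sudo ls -l /etc/kubernetes/cni/net.d/
  # Sandboxes as CRI-O sees them
  sudo crictl pods --name node-ca
  # Progress of the network plugin itself
  oc -n openshift-ovn-kubernetes get pods
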
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:00 crc kubenswrapper[4772]: E1128 11:07:00.994275 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.054177 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-mstcz"] Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.054633 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mstcz" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.057315 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.057666 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7jx2\" (UniqueName: \"kubernetes.io/projected/7f0ff770-c5af-4fea-a576-9bdceb785c30-kube-api-access-g7jx2\") pod \"node-resolver-mstcz\" (UID: \"7f0ff770-c5af-4fea-a576-9bdceb785c30\") " pod="openshift-dns/node-resolver-mstcz" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.057718 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7f0ff770-c5af-4fea-a576-9bdceb785c30-hosts-file\") pod \"node-resolver-mstcz\" (UID: \"7f0ff770-c5af-4fea-a576-9bdceb785c30\") " pod="openshift-dns/node-resolver-mstcz" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.058231 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.060694 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.073218 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.073255 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.073266 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.073283 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.073298 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:01Z","lastTransitionTime":"2025-11-28T11:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.080513 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.092992 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.102332 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.102401 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d2154cacb174547cd6b566648f5fe6d990d59935cc4dd2d188b4c0f9f833f7bb"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.104108 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.106256 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404" exitCode=255 Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.106334 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.107383 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wgsks" event={"ID":"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6","Type":"ContainerStarted","Data":"560c9071a120fb46315d039e7b9ab831dfb738d6768da567423e7cff901dcac5"} Nov 28 11:07:01 crc 
kubenswrapper[4772]: I1128 11:07:01.108582 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"27821597abeb6e31136a6130a22cfb442ab6ff68d201b32239900e22f573adbf"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.110220 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.110258 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.110275 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bb35cd5a4a2515a18d5d8fc133f86054b88b6e8610ec124207bf013a0a3077fd"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.112212 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z"
Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.135287 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z"
Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.157027 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.158571 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7f0ff770-c5af-4fea-a576-9bdceb785c30-hosts-file\") pod \"node-resolver-mstcz\" (UID: \"7f0ff770-c5af-4fea-a576-9bdceb785c30\") " pod="openshift-dns/node-resolver-mstcz" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.158728 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7jx2\" (UniqueName: \"kubernetes.io/projected/7f0ff770-c5af-4fea-a576-9bdceb785c30-kube-api-access-g7jx2\") pod \"node-resolver-mstcz\" (UID: \"7f0ff770-c5af-4fea-a576-9bdceb785c30\") " pod="openshift-dns/node-resolver-mstcz" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.159573 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7f0ff770-c5af-4fea-a576-9bdceb785c30-hosts-file\") pod \"node-resolver-mstcz\" (UID: \"7f0ff770-c5af-4fea-a576-9bdceb785c30\") " pod="openshift-dns/node-resolver-mstcz" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.163203 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.163618 4772 scope.go:117] "RemoveContainer" containerID="2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.181237 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.192270 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.192308 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.192317 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.192333 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.192346 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:01Z","lastTransitionTime":"2025-11-28T11:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.193119 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7jx2\" (UniqueName: \"kubernetes.io/projected/7f0ff770-c5af-4fea-a576-9bdceb785c30-kube-api-access-g7jx2\") pod \"node-resolver-mstcz\" (UID: \"7f0ff770-c5af-4fea-a576-9bdceb785c30\") " pod="openshift-dns/node-resolver-mstcz" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.200278 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.220957 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is 
after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.241430 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.264111 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.280789 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.296609 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.296656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.296669 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.296689 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.296702 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:01Z","lastTransitionTime":"2025-11-28T11:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.304353 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.323844 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.353844 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.389313 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.398419 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.398458 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.398469 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.398489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.398499 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:01Z","lastTransitionTime":"2025-11-28T11:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.404228 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.429538 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.452294 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mstcz" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.455145 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-qsnnj"] Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.455531 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.456787 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b7vdn"] Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.457692 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.463648 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xhnbl"] Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.464289 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.464725 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f0ff770_c5af_4fea_a576_9bdceb785c30.slice/crio-84251f891fc99f651734adc6ae47981ba5d2de6143bd1637a5aec9aecf473561 WatchSource:0}: Error finding container 84251f891fc99f651734adc6ae47981ba5d2de6143bd1637a5aec9aecf473561: Status 404 returned error can't find the container with id 84251f891fc99f651734adc6ae47981ba5d2de6143bd1637a5aec9aecf473561 Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.464864 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.464929 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.464971 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.465135 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.466937 4772 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.466981 4772 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.467096 4772 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.467116 4772 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.467191 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.467614 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-zfsjk"] Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.467925 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.468098 4772 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.468122 4772 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.468155 4772 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.468167 4772 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.468203 4772 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.468214 4772 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.468241 4772 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.468253 4772 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no 
relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.469174 4772 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.469194 4772 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.469255 4772 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.469269 4772 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.469290 4772 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.469301 4772 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.470704 4772 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.470714 4772 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between 
node 'crc' and this object Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.470734 4772 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.470757 4772 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.470705 4772 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.470798 4772 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.470872 4772 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.470890 4772 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.472671 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.476426 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.506877 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.506920 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.506930 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.506948 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.506960 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:01Z","lastTransitionTime":"2025-11-28T11:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.509977 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.528265 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-che
ck-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.547580 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.562784 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-log-socket\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.562827 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a4e5807b-7c14-477e-af8b-1260b997ff17-cni-binary-copy\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.562852 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-slash\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.562877 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-etc-openvswitch\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.562896 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-cni-netd\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.562916 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-multus-socket-dir-parent\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.562937 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-var-lib-cni-bin\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.562975 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-node-log\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.562996 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-env-overrides\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563016 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a4e5807b-7c14-477e-af8b-1260b997ff17-multus-daemon-config\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563034 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-run-multus-certs\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563054 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22265\" (UniqueName: \"kubernetes.io/projected/8e4e32c1-8c60-4972-ae38-a20020b374fe-kube-api-access-22265\") pod \"machine-config-daemon-zfsjk\" (UID: \"8e4e32c1-8c60-4972-ae38-a20020b374fe\") " pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563073 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-run-netns\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563094 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-ovn\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc 
kubenswrapper[4772]: I1128 11:07:01.563125 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-cnibin\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563067 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563259 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563338 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-system-cni-dir\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563387 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-os-release\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563415 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-hostroot\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563442 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e4e32c1-8c60-4972-ae38-a20020b374fe-mcd-auth-proxy-config\") pod \"machine-config-daemon-zfsjk\" (UID: \"8e4e32c1-8c60-4972-ae38-a20020b374fe\") " pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563472 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-config\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563523 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-var-lib-cni-multus\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563564 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-kubelet\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563624 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-multus-conf-dir\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563646 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-run-ovn-kubernetes\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.563746 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-systemd\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564004 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-cni-bin\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564033 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-script-lib\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564066 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-run-netns\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564092 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e4e32c1-8c60-4972-ae38-a20020b374fe-proxy-tls\") pod \"machine-config-daemon-zfsjk\" (UID: \"8e4e32c1-8c60-4972-ae38-a20020b374fe\") " pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564141 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-var-lib-kubelet\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564169 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-var-lib-openvswitch\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564205 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j87wl\" (UniqueName: \"kubernetes.io/projected/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-kube-api-access-j87wl\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564233 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8e4e32c1-8c60-4972-ae38-a20020b374fe-rootfs\") pod \"machine-config-daemon-zfsjk\" (UID: \"8e4e32c1-8c60-4972-ae38-a20020b374fe\") " pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564273 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovn-node-metrics-cert\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564328 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-multus-cni-dir\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564431 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-systemd-units\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564473 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-openvswitch\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564499 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-run-k8s-cni-cncf-io\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564546 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-etc-kubernetes\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.564577 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjn9n\" (UniqueName: \"kubernetes.io/projected/a4e5807b-7c14-477e-af8b-1260b997ff17-kube-api-access-cjn9n\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc 
kubenswrapper[4772]: I1128 11:07:01.582842 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"
2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.600276 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.610338 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.610717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.610731 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.610753 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.611051 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:01Z","lastTransitionTime":"2025-11-28T11:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.620432 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.634340 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.641643 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-28 11:02:00 +0000 UTC, rotation deadline is 2026-10-03 20:32:01.556102382 +0000 UTC Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.641718 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7425h24m59.914387376s for next certificate rotation Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.649333 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.663672 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666035 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-cnibin\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666071 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666095 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-system-cni-dir\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666117 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-os-release\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666135 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-hostroot\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666156 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-config\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666176 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e4e32c1-8c60-4972-ae38-a20020b374fe-mcd-auth-proxy-config\") pod \"machine-config-daemon-zfsjk\" (UID: \"8e4e32c1-8c60-4972-ae38-a20020b374fe\") " pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666190 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-system-cni-dir\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666190 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-cnibin\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666203 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8kmp\" (UniqueName: \"kubernetes.io/projected/23af5070-24a6-4bab-a4d4-48539af4f256-kube-api-access-q8kmp\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666262 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-os-release\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666305 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-kubelet\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666242 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b7vdn\" (UID: 
\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666304 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-hostroot\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666336 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-var-lib-cni-multus\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666377 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-var-lib-cni-multus\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666376 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-kubelet\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666395 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-multus-conf-dir\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666423 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-multus-conf-dir\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666461 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-run-ovn-kubernetes\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666487 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/23af5070-24a6-4bab-a4d4-48539af4f256-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666509 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-systemd\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 
11:07:01.666527 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-cni-bin\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666546 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-script-lib\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666574 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-run-netns\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666594 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e4e32c1-8c60-4972-ae38-a20020b374fe-proxy-tls\") pod \"machine-config-daemon-zfsjk\" (UID: \"8e4e32c1-8c60-4972-ae38-a20020b374fe\") " pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666598 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-cni-bin\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666624 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-var-lib-openvswitch\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666633 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-systemd\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666573 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-run-ovn-kubernetes\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666648 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-var-lib-kubelet\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666674 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-var-lib-kubelet\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666684 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/23af5070-24a6-4bab-a4d4-48539af4f256-cni-binary-copy\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666710 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8e4e32c1-8c60-4972-ae38-a20020b374fe-rootfs\") pod \"machine-config-daemon-zfsjk\" (UID: \"8e4e32c1-8c60-4972-ae38-a20020b374fe\") " pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666734 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j87wl\" (UniqueName: \"kubernetes.io/projected/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-kube-api-access-j87wl\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666753 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-multus-cni-dir\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666769 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/23af5070-24a6-4bab-a4d4-48539af4f256-system-cni-dir\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666790 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-var-lib-openvswitch\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666798 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/23af5070-24a6-4bab-a4d4-48539af4f256-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666823 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovn-node-metrics-cert\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666846 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-systemd-units\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666864 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-openvswitch\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666881 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-run-k8s-cni-cncf-io\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666903 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjn9n\" (UniqueName: \"kubernetes.io/projected/a4e5807b-7c14-477e-af8b-1260b997ff17-kube-api-access-cjn9n\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666937 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-etc-kubernetes\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666955 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-log-socket\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666973 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a4e5807b-7c14-477e-af8b-1260b997ff17-cni-binary-copy\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666990 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-slash\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667007 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-etc-openvswitch\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667026 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-cni-netd\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667044 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-multus-socket-dir-parent\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667047 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e4e32c1-8c60-4972-ae38-a20020b374fe-mcd-auth-proxy-config\") pod \"machine-config-daemon-zfsjk\" (UID: \"8e4e32c1-8c60-4972-ae38-a20020b374fe\") " pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667098 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-var-lib-cni-bin\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667071 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-var-lib-cni-bin\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666825 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8e4e32c1-8c60-4972-ae38-a20020b374fe-rootfs\") pod \"machine-config-daemon-zfsjk\" (UID: \"8e4e32c1-8c60-4972-ae38-a20020b374fe\") " pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667167 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-multus-cni-dir\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.666904 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-run-netns\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667206 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-run-k8s-cni-cncf-io\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667214 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/23af5070-24a6-4bab-a4d4-48539af4f256-os-release\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: 
\"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667234 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-openvswitch\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667240 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-slash\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667248 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-node-log\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667257 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-cni-netd\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667262 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-etc-kubernetes\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667268 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-env-overrides\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667295 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-node-log\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667300 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-log-socket\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667219 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-systemd-units\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667338 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-multus-socket-dir-parent\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667343 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a4e5807b-7c14-477e-af8b-1260b997ff17-multus-daemon-config\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667381 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-etc-openvswitch\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667428 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-run-multus-certs\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667481 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22265\" (UniqueName: \"kubernetes.io/projected/8e4e32c1-8c60-4972-ae38-a20020b374fe-kube-api-access-22265\") pod \"machine-config-daemon-zfsjk\" (UID: \"8e4e32c1-8c60-4972-ae38-a20020b374fe\") " pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667517 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a4e5807b-7c14-477e-af8b-1260b997ff17-host-run-multus-certs\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667547 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/23af5070-24a6-4bab-a4d4-48539af4f256-cnibin\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667586 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-run-netns\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667621 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-ovn\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667655 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-ovn\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.667696 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-run-netns\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.668059 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a4e5807b-7c14-477e-af8b-1260b997ff17-cni-binary-copy\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.668241 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a4e5807b-7c14-477e-af8b-1260b997ff17-multus-daemon-config\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.686009 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6
7314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.687040 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjn9n\" (UniqueName: \"kubernetes.io/projected/a4e5807b-7c14-477e-af8b-1260b997ff17-kube-api-access-cjn9n\") pod \"multus-qsnnj\" (UID: \"a4e5807b-7c14-477e-af8b-1260b997ff17\") " pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.706165 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.719323 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.719374 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.719386 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.719407 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.719420 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:01Z","lastTransitionTime":"2025-11-28T11:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.721882 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.738531 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.752074 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.768507 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.768644 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.768691 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/23af5070-24a6-4bab-a4d4-48539af4f256-cnibin\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.768717 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8kmp\" (UniqueName: \"kubernetes.io/projected/23af5070-24a6-4bab-a4d4-48539af4f256-kube-api-access-q8kmp\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.768740 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.768763 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.768778 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/23af5070-24a6-4bab-a4d4-48539af4f256-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.768964 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/23af5070-24a6-4bab-a4d4-48539af4f256-cni-binary-copy\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " 
pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.769007 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/23af5070-24a6-4bab-a4d4-48539af4f256-system-cni-dir\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.769054 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/23af5070-24a6-4bab-a4d4-48539af4f256-cnibin\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.769071 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/23af5070-24a6-4bab-a4d4-48539af4f256-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.769164 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.769212 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/23af5070-24a6-4bab-a4d4-48539af4f256-system-cni-dir\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.769227 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:07:03.769183155 +0000 UTC m=+22.092426382 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.769307 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.769335 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/23af5070-24a6-4bab-a4d4-48539af4f256-os-release\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.769262 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.769396 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.769416 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:03.769390411 +0000 UTC m=+22.092633848 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.769425 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.769446 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.769422 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/23af5070-24a6-4bab-a4d4-48539af4f256-os-release\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.769448 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:03.769436072 +0000 UTC m=+22.092679539 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.769508 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:03.769493064 +0000 UTC m=+22.092736461 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.769548 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.769561 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.769568 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:01 crc kubenswrapper[4772]: E1128 11:07:01.769592 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:03.769584166 +0000 UTC m=+22.092827393 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.770187 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/23af5070-24a6-4bab-a4d4-48539af4f256-cni-binary-copy\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.770291 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/23af5070-24a6-4bab-a4d4-48539af4f256-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.779284 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-qsnnj" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.793431 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8kmp\" (UniqueName: \"kubernetes.io/projected/23af5070-24a6-4bab-a4d4-48539af4f256-kube-api-access-q8kmp\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.821569 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.821687 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.821700 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.821717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.821728 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:01Z","lastTransitionTime":"2025-11-28T11:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.845270 4772 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.845641 4772 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.845676 4772 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.845774 4772 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.845818 4772 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.845852 4772 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.845862 4772 reflector.go:484] 
object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.845988 4772 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.846011 4772 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Nov 28 11:07:01 crc kubenswrapper[4772]: W1128 11:07:01.846085 4772 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.930542 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.930587 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.930599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.930618 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.930629 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:01Z","lastTransitionTime":"2025-11-28T11:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.998774 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 28 11:07:01 crc kubenswrapper[4772]: I1128 11:07:01.999463 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.001008 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.001815 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.003100 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.003766 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.004549 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.005685 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.006439 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.007719 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.008344 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.009864 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.010493 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.011097 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 28 11:07:02 
crc kubenswrapper[4772]: I1128 11:07:02.012224 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.012853 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.016930 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.017414 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.018209 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.019472 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.020083 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.021309 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.021837 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.022933 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.023344 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.023956 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.025029 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.025550 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.026491 4772 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.026811 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.027049 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.027934 4772 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.028036 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.030795 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.031739 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.032472 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.033875 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.034197 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.034209 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.034229 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.034241 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:02Z","lastTransitionTime":"2025-11-28T11:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.035728 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.036422 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.037299 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.038024 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.039149 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.039666 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.040641 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.041602 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.042620 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.043113 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.044009 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.044652 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 28 11:07:02 
crc kubenswrapper[4772]: I1128 11:07:02.045728 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.046218 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.047056 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.047566 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.048456 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.049142 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.049642 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.051904 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.067700 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.083249 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.100687 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28
T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.114804 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mstcz" event={"ID":"7f0ff770-c5af-4fea-a576-9bdceb785c30","Type":"ContainerStarted","Data":"e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6"} Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.114869 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mstcz" event={"ID":"7f0ff770-c5af-4fea-a576-9bdceb785c30","Type":"ContainerStarted","Data":"84251f891fc99f651734adc6ae47981ba5d2de6143bd1637a5aec9aecf473561"} Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.115435 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.116298 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wgsks" event={"ID":"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6","Type":"ContainerStarted","Data":"537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831"} Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.118473 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.120207 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c"} Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.120467 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.122049 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qsnnj" event={"ID":"a4e5807b-7c14-477e-af8b-1260b997ff17","Type":"ContainerStarted","Data":"a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b"} Nov 
28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.122076 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qsnnj" event={"ID":"a4e5807b-7c14-477e-af8b-1260b997ff17","Type":"ContainerStarted","Data":"0393d491cd6f55e3d77ec6dab54c77239a0ac9d251fbc603ba002495dd907eed"} Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.128560 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.137452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.137492 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.137503 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.137521 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.137532 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:02Z","lastTransitionTime":"2025-11-28T11:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.141470 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.156644 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.181813 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.198748 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.224090 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.240597 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.240661 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.240677 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.240709 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.240723 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:02Z","lastTransitionTime":"2025-11-28T11:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.249643 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.284636 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b
4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.299762 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.317656 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.333534 4772 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.338844 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.343200 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.343228 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.343238 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.343255 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.343266 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:02Z","lastTransitionTime":"2025-11-28T11:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.348983 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.361469 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.369639 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.380422 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/23af5070-24a6-4bab-a4d4-48539af4f256-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xhnbl\" (UID: \"23af5070-24a6-4bab-a4d4-48539af4f256\") " pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.390770 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.412917 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd 
nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cn
i-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.421270 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.424139 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.435784 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-pl
ugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a7
14c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: W1128 11:07:02.443169 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23af5070_24a6_4bab_a4d4_48539af4f256.slice/crio-24213fb17915a745f78d531e54efca3ae7a020657fd5b5455b34a2ea58ccc474 WatchSource:0}: Error finding container 24213fb17915a745f78d531e54efca3ae7a020657fd5b5455b34a2ea58ccc474: Status 404 returned error can't find the container with id 24213fb17915a745f78d531e54efca3ae7a020657fd5b5455b34a2ea58ccc474 Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.445253 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.445300 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.445314 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.445334 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.445348 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:02Z","lastTransitionTime":"2025-11-28T11:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.454417 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.458413 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-config\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.468653 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.486229 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.491399 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.492663 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovn-node-metrics-cert\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.534263 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 
1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.550476 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.550512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.550522 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.550538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.550550 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:02Z","lastTransitionTime":"2025-11-28T11:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.551954 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.576461 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.577934 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-env-overrides\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.596445 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325
7453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.638232 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host
-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.652924 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.653497 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.653535 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.653562 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.653584 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:02Z","lastTransitionTime":"2025-11-28T11:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:02 crc kubenswrapper[4772]: E1128 11:07:02.666887 4772 secret.go:188] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Nov 28 11:07:02 crc kubenswrapper[4772]: E1128 11:07:02.666994 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e4e32c1-8c60-4972-ae38-a20020b374fe-proxy-tls podName:8e4e32c1-8c60-4972-ae38-a20020b374fe nodeName:}" failed. No retries permitted until 2025-11-28 11:07:03.166970123 +0000 UTC m=+21.490213350 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8e4e32c1-8c60-4972-ae38-a20020b374fe-proxy-tls") pod "machine-config-daemon-zfsjk" (UID: "8e4e32c1-8c60-4972-ae38-a20020b374fe") : failed to sync secret cache: timed out waiting for the condition Nov 28 11:07:02 crc kubenswrapper[4772]: E1128 11:07:02.667101 4772 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Nov 28 11:07:02 crc kubenswrapper[4772]: E1128 11:07:02.667216 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-script-lib podName:52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a nodeName:}" failed. No retries permitted until 2025-11-28 11:07:03.167189659 +0000 UTC m=+21.490432886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-script-lib") pod "ovnkube-node-b7vdn" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a") : failed to sync configmap cache: timed out waiting for the condition Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.673294 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 11:07:02 crc kubenswrapper[4772]: E1128 11:07:02.681851 4772 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.705607 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 11:07:02 crc kubenswrapper[4772]: E1128 11:07:02.712254 4772 projected.go:194] Error preparing data for projected volume kube-api-access-j87wl for pod openshift-ovn-kubernetes/ovnkube-node-b7vdn: failed to sync configmap cache: timed out waiting for the condition Nov 28 11:07:02 crc kubenswrapper[4772]: E1128 11:07:02.712379 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-kube-api-access-j87wl podName:52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a nodeName:}" failed. No retries permitted until 2025-11-28 11:07:03.212339378 +0000 UTC m=+21.535582605 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j87wl" (UniqueName: "kubernetes.io/projected/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-kube-api-access-j87wl") pod "ovnkube-node-b7vdn" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a") : failed to sync configmap cache: timed out waiting for the condition Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.756646 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.756692 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.756702 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.756721 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.756732 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:02Z","lastTransitionTime":"2025-11-28T11:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.762044 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.789531 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.797154 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22265\" (UniqueName: \"kubernetes.io/projected/8e4e32c1-8c60-4972-ae38-a20020b374fe-kube-api-access-22265\") pod \"machine-config-daemon-zfsjk\" (UID: \"8e4e32c1-8c60-4972-ae38-a20020b374fe\") " pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.799240 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.857799 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.859893 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.859949 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.859967 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.859991 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.860010 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:02Z","lastTransitionTime":"2025-11-28T11:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.921870 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.926899 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.938071 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.945473 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.963186 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.963242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.963257 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.963279 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.963298 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:02Z","lastTransitionTime":"2025-11-28T11:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.963961 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.988820 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.993449 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.993447 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:02 crc kubenswrapper[4772]: E1128 11:07:02.993686 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:02 crc kubenswrapper[4772]: I1128 11:07:02.993464 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:02 crc kubenswrapper[4772]: E1128 11:07:02.993989 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:02 crc kubenswrapper[4772]: E1128 11:07:02.994159 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.054250 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.067397 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.077435 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.077476 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.077485 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.077499 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.077507 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:03Z","lastTransitionTime":"2025-11-28T11:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.113728 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.128438 4772 generic.go:334] "Generic (PLEG): container finished" podID="23af5070-24a6-4bab-a4d4-48539af4f256" containerID="e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d" exitCode=0 Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.128510 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" event={"ID":"23af5070-24a6-4bab-a4d4-48539af4f256","Type":"ContainerDied","Data":"e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d"} Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.128562 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" event={"ID":"23af5070-24a6-4bab-a4d4-48539af4f256","Type":"ContainerStarted","Data":"24213fb17915a745f78d531e54efca3ae7a020657fd5b5455b34a2ea58ccc474"} Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.162585 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:03Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.179912 4772 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.180134 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.180198 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.180297 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.180375 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:03Z","lastTransitionTime":"2025-11-28T11:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.185430 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:03Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.188141 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-script-lib\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.188179 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e4e32c1-8c60-4972-ae38-a20020b374fe-proxy-tls\") pod \"machine-config-daemon-zfsjk\" (UID: \"8e4e32c1-8c60-4972-ae38-a20020b374fe\") " pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.188915 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-script-lib\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.193886 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e4e32c1-8c60-4972-ae38-a20020b374fe-proxy-tls\") pod \"machine-config-daemon-zfsjk\" (UID: \"8e4e32c1-8c60-4972-ae38-a20020b374fe\") " pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.197434 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:03Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.208275 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.209024 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:03Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.230065 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b
4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:03Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.249279 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:03Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.266041 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:03Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.281349 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:03Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.283203 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.283263 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.283274 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.283312 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.283326 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:03Z","lastTransitionTime":"2025-11-28T11:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.289595 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j87wl\" (UniqueName: \"kubernetes.io/projected/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-kube-api-access-j87wl\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.293077 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j87wl\" (UniqueName: \"kubernetes.io/projected/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-kube-api-access-j87wl\") pod \"ovnkube-node-b7vdn\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.293338 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.293745 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:03Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.307465 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:03Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:03 crc kubenswrapper[4772]: W1128 11:07:03.308334 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52f8704c_e8fc_4a0e_bfd7_94d78ee6f09a.slice/crio-28841ba68f77615df84f63141d03539694a1af2a72e0eafbf151ad7573ac556a WatchSource:0}: Error finding container 28841ba68f77615df84f63141d03539694a1af2a72e0eafbf151ad7573ac556a: Status 404 returned error can't find the container with id 28841ba68f77615df84f63141d03539694a1af2a72e0eafbf151ad7573ac556a Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.313063 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.321084 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:03Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.339435 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:03Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:03 crc kubenswrapper[4772]: W1128 11:07:03.340404 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e4e32c1_8c60_4972_ae38_a20020b374fe.slice/crio-0254d91f55aa83706729e3109d3ba71d7eb78da30d3233d50b6dd2baf84e3c22 WatchSource:0}: Error finding container 0254d91f55aa83706729e3109d3ba71d7eb78da30d3233d50b6dd2baf84e3c22: Status 404 returned error can't find the container with id 
0254d91f55aa83706729e3109d3ba71d7eb78da30d3233d50b6dd2baf84e3c22 Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.355087 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\
\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:03Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.369202 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:03Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.389312 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.389385 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.389398 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.389418 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.389432 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:03Z","lastTransitionTime":"2025-11-28T11:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.492800 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.492857 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.492869 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.492890 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.492904 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:03Z","lastTransitionTime":"2025-11-28T11:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.595895 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.595952 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.595965 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.595989 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.596002 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:03Z","lastTransitionTime":"2025-11-28T11:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.699523 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.699568 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.699583 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.699602 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.699614 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:03Z","lastTransitionTime":"2025-11-28T11:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.793905 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.794092 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.794130 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.794159 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.794177 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:03 crc kubenswrapper[4772]: E1128 11:07:03.794279 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:07:03 crc kubenswrapper[4772]: E1128 11:07:03.794334 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:07.794321527 +0000 UTC m=+26.117564754 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:07:03 crc kubenswrapper[4772]: E1128 11:07:03.794744 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 11:07:07.794735298 +0000 UTC m=+26.117978525 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:07:03 crc kubenswrapper[4772]: E1128 11:07:03.794788 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:07:03 crc kubenswrapper[4772]: E1128 11:07:03.794818 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:07.79480946 +0000 UTC m=+26.118052687 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:07:03 crc kubenswrapper[4772]: E1128 11:07:03.794876 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:07:03 crc kubenswrapper[4772]: E1128 11:07:03.794895 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:07:03 crc kubenswrapper[4772]: E1128 11:07:03.794908 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:03 crc kubenswrapper[4772]: E1128 11:07:03.794936 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:07.794927453 +0000 UTC m=+26.118170680 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:03 crc kubenswrapper[4772]: E1128 11:07:03.794982 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:07:03 crc kubenswrapper[4772]: E1128 11:07:03.794996 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:07:03 crc kubenswrapper[4772]: E1128 11:07:03.795005 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:03 crc kubenswrapper[4772]: E1128 11:07:03.795039 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:07.795029556 +0000 UTC m=+26.118272783 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.803408 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.803477 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.803496 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.803527 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.803547 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:03Z","lastTransitionTime":"2025-11-28T11:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.906177 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.906604 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.906617 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.906644 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:03 crc kubenswrapper[4772]: I1128 11:07:03.906658 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:03Z","lastTransitionTime":"2025-11-28T11:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.010703 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.010756 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.010778 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.010803 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.010823 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:04Z","lastTransitionTime":"2025-11-28T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.118870 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.118910 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.118919 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.118934 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.118943 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:04Z","lastTransitionTime":"2025-11-28T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.136944 4772 generic.go:334] "Generic (PLEG): container finished" podID="23af5070-24a6-4bab-a4d4-48539af4f256" containerID="8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a" exitCode=0 Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.137065 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" event={"ID":"23af5070-24a6-4bab-a4d4-48539af4f256","Type":"ContainerDied","Data":"8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.142780 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.142841 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.142850 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"0254d91f55aa83706729e3109d3ba71d7eb78da30d3233d50b6dd2baf84e3c22"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.147446 4772 generic.go:334] "Generic (PLEG): container finished" podID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerID="d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1" exitCode=0 Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.147556 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerDied","Data":"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.147597 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerStarted","Data":"28841ba68f77615df84f63141d03539694a1af2a72e0eafbf151ad7573ac556a"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.150742 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.171824 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.188919 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.203380 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.225667 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.225723 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.225733 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.225755 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.225767 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:04Z","lastTransitionTime":"2025-11-28T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.239199 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06
:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a
18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.255489 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.271933 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.285989 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.298557 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.318003 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.328528 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.328563 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 
11:07:04.328575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.328592 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.328602 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:04Z","lastTransitionTime":"2025-11-28T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.336449 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.353920 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.368586 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.385122 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.406812 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.429157 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.444533 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.444577 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.444586 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.444607 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.444618 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:04Z","lastTransitionTime":"2025-11-28T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.448894 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.492570 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.530975 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.554790 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.554846 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.554860 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.554881 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.554891 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:04Z","lastTransitionTime":"2025-11-28T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.554969 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.576836 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.591063 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.605211 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.617050 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.631986 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.655828 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.658543 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.658575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.658586 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.658604 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.658614 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:04Z","lastTransitionTime":"2025-11-28T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.672326 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.685674 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.695938 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.761112 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.761151 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.761160 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.761176 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.761184 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:04Z","lastTransitionTime":"2025-11-28T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.863856 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.864122 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.864234 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.864325 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.864436 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:04Z","lastTransitionTime":"2025-11-28T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.966315 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.966416 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.966440 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.966466 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.966480 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:04Z","lastTransitionTime":"2025-11-28T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.993605 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.993673 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:04 crc kubenswrapper[4772]: I1128 11:07:04.993605 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:04 crc kubenswrapper[4772]: E1128 11:07:04.993752 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:04 crc kubenswrapper[4772]: E1128 11:07:04.993855 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:04 crc kubenswrapper[4772]: E1128 11:07:04.993942 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.070008 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.070063 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.070074 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.070096 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.070110 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:05Z","lastTransitionTime":"2025-11-28T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.161215 4772 generic.go:334] "Generic (PLEG): container finished" podID="23af5070-24a6-4bab-a4d4-48539af4f256" containerID="43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3" exitCode=0 Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.161320 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" event={"ID":"23af5070-24a6-4bab-a4d4-48539af4f256","Type":"ContainerDied","Data":"43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.166903 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerStarted","Data":"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.166966 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerStarted","Data":"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.166982 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerStarted","Data":"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.166995 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerStarted","Data":"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.167005 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerStarted","Data":"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.167070 4772 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerStarted","Data":"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.172229 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.172298 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.172318 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.172348 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.172404 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:05Z","lastTransitionTime":"2025-11-28T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.180812 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:05Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.202323 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:05Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.226843 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981
ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:05Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.242780 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:05Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.257666 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:05Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.271788 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:05Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.275138 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.275164 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.275175 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.275193 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.275204 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:05Z","lastTransitionTime":"2025-11-28T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.284451 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:05Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.304351 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:05Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.321317 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:05Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.340122 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:05Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.355288 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:05Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.369799 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 
1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:05Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.377542 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.377576 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.377585 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.377601 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.377615 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:05Z","lastTransitionTime":"2025-11-28T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.384833 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:05Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.398436 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:05Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.480216 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.480273 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.480291 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.480319 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.480338 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:05Z","lastTransitionTime":"2025-11-28T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.583230 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.583277 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.583286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.583301 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.583312 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:05Z","lastTransitionTime":"2025-11-28T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.686423 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.686465 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.686475 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.686493 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.686505 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:05Z","lastTransitionTime":"2025-11-28T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.789308 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.789391 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.789405 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.789431 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.789447 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:05Z","lastTransitionTime":"2025-11-28T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.892297 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.892339 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.892350 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.892387 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.892403 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:05Z","lastTransitionTime":"2025-11-28T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.994747 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.994785 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.994795 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.994807 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:05 crc kubenswrapper[4772]: I1128 11:07:05.994816 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:05Z","lastTransitionTime":"2025-11-28T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.097784 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.097820 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.097828 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.097841 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.097852 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:06Z","lastTransitionTime":"2025-11-28T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.176169 4772 generic.go:334] "Generic (PLEG): container finished" podID="23af5070-24a6-4bab-a4d4-48539af4f256" containerID="5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734" exitCode=0 Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.176255 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" event={"ID":"23af5070-24a6-4bab-a4d4-48539af4f256","Type":"ContainerDied","Data":"5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734"} Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.200919 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.200971 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.200985 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.201009 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.201023 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:06Z","lastTransitionTime":"2025-11-28T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.203791 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:06Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.221680 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:06Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.240247 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:06Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.254069 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:06Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.271041 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:06Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.291031 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:06Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.307738 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.307775 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.307785 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.307800 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.307810 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:06Z","lastTransitionTime":"2025-11-28T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.308733 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:06Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.327995 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:06Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.366104 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:06Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.383277 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:06Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.404391 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:06Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.410563 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.410604 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.410616 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.410633 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.410645 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:06Z","lastTransitionTime":"2025-11-28T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.416117 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:06Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.426478 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:06Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.447971 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:06Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.513903 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.513939 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.513946 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.513964 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.513973 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:06Z","lastTransitionTime":"2025-11-28T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.617677 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.617718 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.617730 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.617757 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.617984 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:06Z","lastTransitionTime":"2025-11-28T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.721720 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.721771 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.721781 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.721799 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.721811 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:06Z","lastTransitionTime":"2025-11-28T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.825617 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.825698 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.825722 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.825757 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.825785 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:06Z","lastTransitionTime":"2025-11-28T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.929220 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.929292 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.929318 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.929349 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.929403 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:06Z","lastTransitionTime":"2025-11-28T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.993535 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:06 crc kubenswrapper[4772]: E1128 11:07:06.993734 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.993732 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:06 crc kubenswrapper[4772]: I1128 11:07:06.993798 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:06 crc kubenswrapper[4772]: E1128 11:07:06.993852 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:06 crc kubenswrapper[4772]: E1128 11:07:06.994055 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.033121 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.033199 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.033216 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.033246 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.033263 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:07Z","lastTransitionTime":"2025-11-28T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.044698 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.050551 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.058588 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.071919 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-
o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f
0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.088220 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.105867 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.122924 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.135863 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.135938 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.135956 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.135988 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.136008 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:07Z","lastTransitionTime":"2025-11-28T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.136996 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.157459 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.169758 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.185643 4772 generic.go:334] "Generic (PLEG): container finished" podID="23af5070-24a6-4bab-a4d4-48539af4f256" containerID="2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1" exitCode=0 Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.185715 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" event={"ID":"23af5070-24a6-4bab-a4d4-48539af4f256","Type":"ContainerDied","Data":"2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1"} Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.192045 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerStarted","Data":"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a"} Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.197829 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z 
is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.215121 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.233244 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.238776 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.239004 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.239076 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.239160 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.239272 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:07Z","lastTransitionTime":"2025-11-28T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.250972 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.270026 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.291303 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.306871 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.327230 4772 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba0
6bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.343182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.343256 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.343270 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.343296 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.343309 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:07Z","lastTransitionTime":"2025-11-28T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.344437 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.362920 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.378421 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.397590 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.413885 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.434399 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b
4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.446474 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.446538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.446553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.446576 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.446592 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:07Z","lastTransitionTime":"2025-11-28T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.449956 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.466634 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.479135 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.492152 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.505807 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.517954 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.542010 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.549021 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.549077 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.549094 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.549118 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.549132 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:07Z","lastTransitionTime":"2025-11-28T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.564190 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8k
mp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:07Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.652337 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.652433 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.652444 4772 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.652469 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.652479 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:07Z","lastTransitionTime":"2025-11-28T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.755211 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.755269 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.755292 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.755318 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.755336 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:07Z","lastTransitionTime":"2025-11-28T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.838099 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.838211 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.838256 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.838280 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.838312 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:07 crc kubenswrapper[4772]: E1128 11:07:07.838437 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:07:07 crc kubenswrapper[4772]: E1128 11:07:07.838506 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:15.83848517 +0000 UTC m=+34.161728417 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:07:07 crc kubenswrapper[4772]: E1128 11:07:07.839130 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:07:07 crc kubenswrapper[4772]: E1128 11:07:07.839165 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:07:07 crc kubenswrapper[4772]: E1128 11:07:07.839184 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:07 crc kubenswrapper[4772]: E1128 11:07:07.839287 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:07:07 crc kubenswrapper[4772]: E1128 11:07:07.839304 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:07:07 crc kubenswrapper[4772]: E1128 11:07:07.839317 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:07 crc kubenswrapper[4772]: E1128 11:07:07.839409 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:07:07 crc kubenswrapper[4772]: E1128 11:07:07.839462 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:07:15.838946832 +0000 UTC m=+34.162190079 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:07:07 crc kubenswrapper[4772]: E1128 11:07:07.839513 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:15.839502466 +0000 UTC m=+34.162745713 (durationBeforeRetry 8s). 
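
The MountVolume.SetUp and UnmountVolume.TearDown failures in the entries above are secondary symptoms. The "object ... not registered" errors mean the kubelet has not yet re-registered these pods with its ConfigMap/Secret managers after the restart, and the CSI error means the kubevirt.io.hostpath-provisioner driver has not yet re-registered its plugin socket. Each failed volume operation is simply re-queued with exponential backoff ("No retries permitted until ... durationBeforeRetry 8s"). A minimal sketch of that retry pattern follows; the base delay, factor, and cap are illustrative assumptions, not the kubelet's actual constants, which are not visible in this log:

```go
// backoff.go — generic capped exponential backoff, mirroring the
// "durationBeforeRetry" entries above. All constants here are
// assumptions for illustration, not the kubelet's real values.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // assumed initial delay
	maxDelay := 2 * time.Minute     // assumed upper bound
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d: next retry permitted in %s\n", attempt, delay)
		delay *= 2 // double the wait after every failure
		if delay > maxDelay {
			delay = maxDelay // clamp so retries keep a bounded cadence
		}
	}
}
```

These retries succeed on their own once the pod workers finish syncing and the CSI driver re-registers; the entries alone do not imply any operator action.
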
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 11:07:07 crc kubenswrapper[4772]: E1128 11:07:07.839587 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:15.839522217 +0000 UTC m=+34.162765454 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 11:07:07 crc kubenswrapper[4772]: E1128 11:07:07.839621 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:15.839604439 +0000 UTC m=+34.162847686 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.859512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.859570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.859591 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.859619 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.859643 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:07Z","lastTransitionTime":"2025-11-28T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.962881 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.962919 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.962937 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.962957 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 11:07:07 crc kubenswrapper[4772]: I1128 11:07:07.962972 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:07Z","lastTransitionTime":"2025-11-28T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.066448 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.066514 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.066532 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.066554 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.066571 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:08Z","lastTransitionTime":"2025-11-28T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.170154 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.170725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.170746 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.170779 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.170800 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:08Z","lastTransitionTime":"2025-11-28T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.199496 4772 generic.go:334] "Generic (PLEG): container finished" podID="23af5070-24a6-4bab-a4d4-48539af4f256" containerID="71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83" exitCode=0
Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.199706 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" event={"ID":"23af5070-24a6-4bab-a4d4-48539af4f256","Type":"ContainerDied","Data":"71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83"}
Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.221969 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.239768 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T11:07:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.258008 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.275737 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.275832 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.275850 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.275880 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.275902 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:08Z","lastTransitionTime":"2025-11-28T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.292857 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.311491 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.330394 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.347689 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.366584 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.379299 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.379373 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.379388 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.379414 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.379428 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:08Z","lastTransitionTime":"2025-11-28T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.396689 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:08Z 
is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.413964 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.429699 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.451577 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.468753 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.482557 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.482599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.482610 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.482625 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.482636 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:08Z","lastTransitionTime":"2025-11-28T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.484740 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.497547 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.585772 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.585868 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.585883 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.585909 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.585927 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:08Z","lastTransitionTime":"2025-11-28T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.689234 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.689317 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.689337 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.689399 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.689424 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:08Z","lastTransitionTime":"2025-11-28T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.793559 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.793624 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.793642 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.793670 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.793689 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:08Z","lastTransitionTime":"2025-11-28T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.898523 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.898599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.898626 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.898664 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.898690 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:08Z","lastTransitionTime":"2025-11-28T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.993793 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.993840 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:08 crc kubenswrapper[4772]: E1128 11:07:08.993962 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:08 crc kubenswrapper[4772]: I1128 11:07:08.993974 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:08 crc kubenswrapper[4772]: E1128 11:07:08.994106 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:08 crc kubenswrapper[4772]: E1128 11:07:08.994220 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.001768 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.001817 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.001832 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.001852 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.001867 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:09Z","lastTransitionTime":"2025-11-28T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.114864 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.114941 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.114960 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.114992 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.115013 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:09Z","lastTransitionTime":"2025-11-28T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.211656 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" event={"ID":"23af5070-24a6-4bab-a4d4-48539af4f256","Type":"ContainerStarted","Data":"3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2"} Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.217764 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.217828 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.217845 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.217873 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.217892 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:09Z","lastTransitionTime":"2025-11-28T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.238056 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.259641 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 
1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.282330 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.302344 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.320738 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.321629 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.321680 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.321693 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.321717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.321733 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:09Z","lastTransitionTime":"2025-11-28T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.344430 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.380180 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.399426 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.422770 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.426001 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.426049 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.426061 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.426082 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.426096 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:09Z","lastTransitionTime":"2025-11-28T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.441986 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.458412 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.475673 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.490211 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.514133 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.529052 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.529091 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.529103 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.529119 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.529133 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:09Z","lastTransitionTime":"2025-11-28T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.537759 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.632640 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.632688 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:09 crc 
kubenswrapper[4772]: I1128 11:07:09.632697 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.632713 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.632726 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:09Z","lastTransitionTime":"2025-11-28T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.735864 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.735905 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.735917 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.735938 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.735950 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:09Z","lastTransitionTime":"2025-11-28T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.839323 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.839427 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.839444 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.839466 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.839482 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:09Z","lastTransitionTime":"2025-11-28T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.942255 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.942329 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.942344 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.942393 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:09 crc kubenswrapper[4772]: I1128 11:07:09.942408 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:09Z","lastTransitionTime":"2025-11-28T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.045961 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.046023 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.046037 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.046059 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.046077 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:10Z","lastTransitionTime":"2025-11-28T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.149452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.149897 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.150083 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.150297 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.150567 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:10Z","lastTransitionTime":"2025-11-28T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.223576 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerStarted","Data":"d3d784b955c9c55d29aa9f10abcb9d363f354e092e5a1345bdd98c8fd62259a2"} Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.224312 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.224505 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.245523 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.254177 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.254232 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.254250 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.254278 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.254297 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:10Z","lastTransitionTime":"2025-11-28T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.264587 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.264989 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.266317 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.305809 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d784b955c9c55d29aa9f10abcb9d363f354e092e5a1345bdd98c8fd62259a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.326380 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.342777 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z"
Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.358319 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.358457 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.358473 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.358494 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.358509 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:10Z","lastTransitionTime":"2025-11-28T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
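The recurring NodeNotReady condition is a separate problem from the webhook failure: the container runtime reports NetworkReady=false because nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet, and per the entries above ovnkube-controller only started at 11:07:09. A rough Go sketch of the file scan behind that condition follows; the real check lives in the runtime's libcni/ocicni code, and the extension list here mirrors what libcni accepts but is an assumption of this sketch.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Directory named in the "Network plugin returns error" message above.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v (runtime would report NetworkReady=false)\n", dir, err)
		return
	}
	var confs []string
	for _, e := range entries {
		// Extensions libcni is understood to accept; an assumption here.
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("no CNI configuration file found; node would stay NotReady")
		return
	}
	fmt.Printf("found CNI config(s) %v; NetworkReady should become true\n", confs)
}

Once ovnkube-controller writes its config into that directory, the runtime's next status poll should flip NetworkReady to true and the kubelet should stop emitting these NodeNotReady events.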
Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.364966 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.381177 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.397156 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.418630 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.436873 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.461979 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.462058 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.462079 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.462107 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.462126 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:10Z","lastTransitionTime":"2025-11-28T11:07:10Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.477007 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.500833 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.517943 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.531462 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.545857 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.564793 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.565911 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.565985 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.566008 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.566044 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.566068 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:10Z","lastTransitionTime":"2025-11-28T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
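Every status-patch failure in this log shares one root cause: the kubelet's PATCH requests are rejected because the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-28T11:07:10Z. A quick way to confirm the certificate's validity window from the node itself (a sketch, assuming openssl is installed and the webhook is still listening on the loopback port shown in the log):

    # Fetch the webhook's serving certificate and print its subject and validity dates
    openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates

If notAfter matches the 2025-08-24T17:21:41Z expiry reported by the kubelet, the certificate has to be rotated before any of these pod or node status patches can succeed.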
Nov 28 11:07:10 crc kubenswrapper[4772]: E1128 11:07:10.678191 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z"
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.707684 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.707758 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
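The repeated "failed calling webhook" errors above all trace to a single cause: the network-node-identity webhook on 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24, so every node and pod status patch is rejected before it reaches the API server. Below is a minimal diagnostic sketch in Go, assuming it is run on the node itself and using the address quoted in the log; the file name and overall shape are illustrative, not part of the logged system.

// cert_probe.go — a minimal diagnostic sketch, not part of the logged system.
// Assumption: run on the node; the address comes from the webhook URL in the
// log ("https://127.0.0.1:9743/node?timeout=10s").
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Skip verification on purpose: the goal is to read the expired
	// certificate's validity window, not to trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	// The same comparison the TLS stack reports in the log: "current time
	// 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z".
	if time.Now().UTC().After(cert.NotAfter) {
		fmt.Println("certificate has expired")
	}
}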
event="NodeHasNoDiskPressure" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.707782 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.707812 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.707835 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:10Z","lastTransitionTime":"2025-11-28T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.708777 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.722452 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: E1128 11:07:10.725196 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.729905 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.729974 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.729995 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.730028 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.730053 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:10Z","lastTransitionTime":"2025-11-28T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.746025 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d784b955c9c55d29aa9f10abcb9d363f354e09
2e5a1345bdd98c8fd62259a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: E1128 11:07:10.749170 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed2
1\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.754440 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:10 crc kubenswrapper[4772]: 
I1128 11:07:10.754531 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.754553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.754585 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.754603 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:10Z","lastTransitionTime":"2025-11-28T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.764601 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/et
c/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: E1128 11:07:10.775562 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: E1128 11:07:10.775866 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.778341 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.778421 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.778455 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.778480 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.778494 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:10Z","lastTransitionTime":"2025-11-28T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.781698 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.801305 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.818955 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.834388 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.855690 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.882173 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.882229 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.882241 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.882260 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.882272 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:10Z","lastTransitionTime":"2025-11-28T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.985922 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.985982 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.985997 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.986026 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.986043 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:10Z","lastTransitionTime":"2025-11-28T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.994342 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.994413 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:10 crc kubenswrapper[4772]: I1128 11:07:10.994446 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:10 crc kubenswrapper[4772]: E1128 11:07:10.995118 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:10 crc kubenswrapper[4772]: E1128 11:07:10.995284 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:10 crc kubenswrapper[4772]: E1128 11:07:10.995543 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.089175 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.089619 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.089743 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.089855 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.089970 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:11Z","lastTransitionTime":"2025-11-28T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.210088 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.210182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.210203 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.210239 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.210262 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:11Z","lastTransitionTime":"2025-11-28T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.226869 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.313120 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.313171 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.313183 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.313205 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.313219 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:11Z","lastTransitionTime":"2025-11-28T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.416539 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.416612 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.416637 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.416674 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.416696 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:11Z","lastTransitionTime":"2025-11-28T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.519867 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.519928 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.519942 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.519967 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.519985 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:11Z","lastTransitionTime":"2025-11-28T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.622522 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.622587 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.622599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.622618 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.622636 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:11Z","lastTransitionTime":"2025-11-28T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.726147 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.726223 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.726236 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.726260 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.726271 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:11Z","lastTransitionTime":"2025-11-28T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.832079 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.832139 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.832161 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.832189 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.832211 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:11Z","lastTransitionTime":"2025-11-28T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.935452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.935512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.935530 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.935562 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:11 crc kubenswrapper[4772]: I1128 11:07:11.935581 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:11Z","lastTransitionTime":"2025-11-28T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.017274 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.034419 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.038965 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.039025 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.039040 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.039078 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.039098 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:12Z","lastTransitionTime":"2025-11-28T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.055070 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.077392 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.101783 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.118012 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.136043 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.141836 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.141933 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.141950 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.141971 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.142029 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:12Z","lastTransitionTime":"2025-11-28T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.157171 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.174176 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.188876 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.220555 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.232988 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/0.log" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.237628 4772 generic.go:334] "Generic (PLEG): container finished" podID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerID="d3d784b955c9c55d29aa9f10abcb9d363f354e092e5a1345bdd98c8fd62259a2" exitCode=1 Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.237679 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerDied","Data":"d3d784b955c9c55d29aa9f10abcb9d363f354e092e5a1345bdd98c8fd62259a2"} Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.238949 4772 scope.go:117] "RemoveContainer" containerID="d3d784b955c9c55d29aa9f10abcb9d363f354e092e5a1345bdd98c8fd62259a2" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.244210 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.244240 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.244257 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.244282 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.244303 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:12Z","lastTransitionTime":"2025-11-28T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.249233 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d784b955c9c55d29aa9f10abcb9d363f354e092e5a1345bdd98c8fd62259a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\
":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 
11:07:12.270733 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfb
b085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\
\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.286581 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.301083 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.317233 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.342640 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b
4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.347653 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.347710 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.347732 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.347769 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.347792 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:12Z","lastTransitionTime":"2025-11-28T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.363857 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.381642 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.397761 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.415804 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.433584 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.447213 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.450502 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.450561 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.450580 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.450615 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.450643 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:12Z","lastTransitionTime":"2025-11-28T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.484461 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d784b955c9c55d29aa9f10abcb9d363f354e09
2e5a1345bdd98c8fd62259a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d784b955c9c55d29aa9f10abcb9d363f354e092e5a1345bdd98c8fd62259a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:11Z\\\",\\\"message\\\":\\\".681236 6144 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:07:11.681497 6144 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:07:11.682336 6144 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 11:07:11.682406 6144 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 11:07:11.682411 6144 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 11:07:11.682462 6144 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:07:11.682581 6144 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 11:07:11.682591 6144 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 11:07:11.682620 6144 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:07:11.682628 6144 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:07:11.682662 6144 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 11:07:11.682685 6144 factory.go:656] Stopping watch factory\\\\nI1128 11:07:11.682687 6144 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:07:11.682702 6144 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:07:11.682711 6144 ovnkube.go:599] Stopped 
ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.519654 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[
{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.539775 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.555509 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.555584 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.555615 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.555652 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:12 crc 
kubenswrapper[4772]: I1128 11:07:12.555695 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:12Z","lastTransitionTime":"2025-11-28T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.575208 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c
34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.596298 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.619430 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.643499 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.660278 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.660324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.660337 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.660381 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.660397 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:12Z","lastTransitionTime":"2025-11-28T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.764234 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.764285 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.764297 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.764315 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.764329 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:12Z","lastTransitionTime":"2025-11-28T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.804869 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.827315 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.845936 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.866815 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.866870 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.866888 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.866911 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.866924 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:12Z","lastTransitionTime":"2025-11-28T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.871173 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d784b955c9c55d29aa9f10abcb9d363f354e092e5a1345bdd98c8fd62259a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d784b955c9c55d29aa9f10abcb9d363f354e092e5a1345bdd98c8fd62259a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:11Z\\\",\\\"message\\\":\\\".681236 6144 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:07:11.681497 6144 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:07:11.682336 6144 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 11:07:11.682406 6144 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 11:07:11.682411 6144 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 11:07:11.682462 6144 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:07:11.682581 6144 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 11:07:11.682591 6144 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 11:07:11.682620 6144 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:07:11.682628 6144 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:07:11.682662 6144 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 11:07:11.682685 6144 factory.go:656] Stopping watch factory\\\\nI1128 11:07:11.682687 6144 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:07:11.682702 6144 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:07:11.682711 6144 ovnkube.go:599] Stopped 
ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.889099 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[
{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.904408 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhoo
k\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.919576 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni
/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.936617 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220
d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.955582 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.969396 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.969460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.969474 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.969533 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.969553 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:12Z","lastTransitionTime":"2025-11-28T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.978390 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.993967 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.994115 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:12 crc kubenswrapper[4772]: E1128 11:07:12.994134 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:12 crc kubenswrapper[4772]: E1128 11:07:12.994292 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.994451 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:12 crc kubenswrapper[4772]: E1128 11:07:12.994608 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:12 crc kubenswrapper[4772]: I1128 11:07:12.995658 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-28T11:07:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.018590 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.039714 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.068769 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.073102 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.073148 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.073164 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.073188 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.073206 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:13Z","lastTransitionTime":"2025-11-28T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.083939 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.098271 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.176397 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.176458 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.176470 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.176495 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.176509 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:13Z","lastTransitionTime":"2025-11-28T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.245241 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/0.log" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.249608 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerStarted","Data":"f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f"} Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.249788 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.271792 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.279539 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.279599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.279618 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.279648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.279675 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:13Z","lastTransitionTime":"2025-11-28T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.295286 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.315269 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.331527 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.346537 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.360476 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.377575 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.382973 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.383016 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.383026 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.383046 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.383061 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:13Z","lastTransitionTime":"2025-11-28T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.396674 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.424257 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.447409 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.465597 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.483261 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.486479 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.486551 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.486575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.486609 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.486634 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:13Z","lastTransitionTime":"2025-11-28T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.501426 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.540413 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d784b955c9c55d29aa9f10abcb9d363f354e092e5a1345bdd98c8fd62259a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:11Z\\\",\\\"message\\\":\\\".681236 6144 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:07:11.681497 6144 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:07:11.682336 6144 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 11:07:11.682406 6144 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 11:07:11.682411 6144 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 11:07:11.682462 6144 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:07:11.682581 6144 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 11:07:11.682591 6144 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 11:07:11.682620 6144 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:07:11.682628 6144 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:07:11.682662 6144 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 11:07:11.682685 6144 factory.go:656] Stopping watch factory\\\\nI1128 11:07:11.682687 6144 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:07:11.682702 6144 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:07:11.682711 6144 ovnkube.go:599] Stopped 
ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.566079 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.589904 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.589954 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:13 crc 
kubenswrapper[4772]: I1128 11:07:13.589967 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.589988 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.590001 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:13Z","lastTransitionTime":"2025-11-28T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.693144 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.693196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.693210 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.693235 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.693248 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:13Z","lastTransitionTime":"2025-11-28T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.796816 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.796897 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.796924 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.797002 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.797033 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:13Z","lastTransitionTime":"2025-11-28T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.900854 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.900961 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.900982 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.901012 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:13 crc kubenswrapper[4772]: I1128 11:07:13.901032 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:13Z","lastTransitionTime":"2025-11-28T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.003606 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.003685 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.003705 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.003728 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.003748 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:14Z","lastTransitionTime":"2025-11-28T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.075914 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks"] Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.076563 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.081267 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.082013 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.105892 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name
\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.108079 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.108141 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.108160 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.108183 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.108195 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:14Z","lastTransitionTime":"2025-11-28T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.128326 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.155559 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsmg6\" (UniqueName: \"kubernetes.io/projected/b33ea4b3-c282-4391-8da3-3a499a23bb16-kube-api-access-wsmg6\") pod \"ovnkube-control-plane-749d76644c-jrkks\" (UID: \"b33ea4b3-c282-4391-8da3-3a499a23bb16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.155694 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b33ea4b3-c282-4391-8da3-3a499a23bb16-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jrkks\" (UID: \"b33ea4b3-c282-4391-8da3-3a499a23bb16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.155758 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b33ea4b3-c282-4391-8da3-3a499a23bb16-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jrkks\" (UID: \"b33ea4b3-c282-4391-8da3-3a499a23bb16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.155847 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b33ea4b3-c282-4391-8da3-3a499a23bb16-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jrkks\" (UID: \"b33ea4b3-c282-4391-8da3-3a499a23bb16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.158700 4772 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.184053 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.212252 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.212473 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.212551 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.212570 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.212605 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.212637 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:14Z","lastTransitionTime":"2025-11-28T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.235737 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.254650 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.256635 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsmg6\" (UniqueName: 
\"kubernetes.io/projected/b33ea4b3-c282-4391-8da3-3a499a23bb16-kube-api-access-wsmg6\") pod \"ovnkube-control-plane-749d76644c-jrkks\" (UID: \"b33ea4b3-c282-4391-8da3-3a499a23bb16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.256786 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b33ea4b3-c282-4391-8da3-3a499a23bb16-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jrkks\" (UID: \"b33ea4b3-c282-4391-8da3-3a499a23bb16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.256841 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b33ea4b3-c282-4391-8da3-3a499a23bb16-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jrkks\" (UID: \"b33ea4b3-c282-4391-8da3-3a499a23bb16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.256948 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b33ea4b3-c282-4391-8da3-3a499a23bb16-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jrkks\" (UID: \"b33ea4b3-c282-4391-8da3-3a499a23bb16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.258052 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/1.log" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.258250 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b33ea4b3-c282-4391-8da3-3a499a23bb16-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jrkks\" (UID: \"b33ea4b3-c282-4391-8da3-3a499a23bb16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.258581 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b33ea4b3-c282-4391-8da3-3a499a23bb16-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jrkks\" (UID: \"b33ea4b3-c282-4391-8da3-3a499a23bb16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.258931 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/0.log" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.265210 4772 generic.go:334] "Generic (PLEG): container finished" podID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerID="f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f" exitCode=1 Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.265327 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerDied","Data":"f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f"} Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.265518 4772 scope.go:117] "RemoveContainer" 
containerID="d3d784b955c9c55d29aa9f10abcb9d363f354e092e5a1345bdd98c8fd62259a2" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.266259 4772 scope.go:117] "RemoveContainer" containerID="f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f" Nov 28 11:07:14 crc kubenswrapper[4772]: E1128 11:07:14.266521 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.266644 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b33ea4b3-c282-4391-8da3-3a499a23bb16-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jrkks\" (UID: \"b33ea4b3-c282-4391-8da3-3a499a23bb16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.280929 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.290926 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsmg6\" (UniqueName: \"kubernetes.io/projected/b33ea4b3-c282-4391-8da3-3a499a23bb16-kube-api-access-wsmg6\") pod \"ovnkube-control-plane-749d76644c-jrkks\" (UID: \"b33ea4b3-c282-4391-8da3-3a499a23bb16\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.316032 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.316083 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.316096 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.316123 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.316137 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:14Z","lastTransitionTime":"2025-11-28T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.317025 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.336572 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.355165 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.372406 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.396328 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.403329 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.414974 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.419297 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.419403 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.419429 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.419456 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 
11:07:14.419473 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:14Z","lastTransitionTime":"2025-11-28T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:14 crc kubenswrapper[4772]: W1128 11:07:14.427977 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb33ea4b3_c282_4391_8da3_3a499a23bb16.slice/crio-a833e9302bf05fbbd0101260e933bff67a0026bcb299eaddd8ac95975bff339d WatchSource:0}: Error finding container a833e9302bf05fbbd0101260e933bff67a0026bcb299eaddd8ac95975bff339d: Status 404 returned error can't find the container with id a833e9302bf05fbbd0101260e933bff67a0026bcb299eaddd8ac95975bff339d Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.451319 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f570f14ae20624ba49b28c3c3452010618162929
8ec3c4693f2628df8a38208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d784b955c9c55d29aa9f10abcb9d363f354e092e5a1345bdd98c8fd62259a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:11Z\\\",\\\"message\\\":\\\".681236 6144 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:07:11.681497 6144 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:07:11.682336 6144 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 11:07:11.682406 6144 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 11:07:11.682411 6144 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 11:07:11.682462 6144 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:07:11.682581 6144 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 11:07:11.682591 6144 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 11:07:11.682620 6144 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:07:11.682628 6144 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:07:11.682662 6144 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 11:07:11.682685 6144 factory.go:656] Stopping watch factory\\\\nI1128 11:07:11.682687 6144 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:07:11.682702 6144 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:07:11.682711 6144 ovnkube.go:599] Stopped 
ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.475834 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.497575 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.515294 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.522673 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.522737 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.522751 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.522777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.522791 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:14Z","lastTransitionTime":"2025-11-28T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.538462 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.557643 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.573914 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.585071 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.598800 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.624105 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b
4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.627268 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.627334 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.627355 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.627404 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.627422 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:14Z","lastTransitionTime":"2025-11-28T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.643609 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.659081 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.676877 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.691780 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.703835 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.724417 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d784b955c9c55d29aa9f10abcb9d363f354e092e5a1345bdd98c8fd62259a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:11Z\\\",\\\"message\\\":\\\".681236 6144 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:07:11.681497 6144 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:07:11.682336 6144 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 11:07:11.682406 6144 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 11:07:11.682411 6144 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 11:07:11.682462 6144 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:07:11.682581 6144 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 11:07:11.682591 6144 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 11:07:11.682620 6144 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:07:11.682628 6144 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:07:11.682662 6144 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 11:07:11.682685 6144 factory.go:656] Stopping watch factory\\\\nI1128 11:07:11.682687 6144 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:07:11.682702 6144 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:07:11.682711 6144 ovnkube.go:599] Stopped ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:13Z\\\",\\\"message\\\":\\\"0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 11:07:13.273445 6264 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wgsks\\\\nF1128 11:07:13.273773 6264 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:13.273585 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.733349 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.733441 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.733564 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.733623 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.734608 4772 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:14Z","lastTransitionTime":"2025-11-28T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.743210 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.758460 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.837908 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.837948 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.837958 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.837978 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.837992 4772 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:14Z","lastTransitionTime":"2025-11-28T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.942141 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.942207 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.942227 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.942255 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.942275 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:14Z","lastTransitionTime":"2025-11-28T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.994331 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.994341 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:14 crc kubenswrapper[4772]: E1128 11:07:14.994611 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:14 crc kubenswrapper[4772]: I1128 11:07:14.994356 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:14 crc kubenswrapper[4772]: E1128 11:07:14.994729 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:14 crc kubenswrapper[4772]: E1128 11:07:14.994840 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.046600 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.046687 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.046711 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.046743 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.046769 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:15Z","lastTransitionTime":"2025-11-28T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.149540 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.149591 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.149603 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.149623 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.149640 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:15Z","lastTransitionTime":"2025-11-28T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.253579 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.253676 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.253713 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.253748 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.253770 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:15Z","lastTransitionTime":"2025-11-28T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.273115 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" event={"ID":"b33ea4b3-c282-4391-8da3-3a499a23bb16","Type":"ContainerStarted","Data":"fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821"} Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.273220 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" event={"ID":"b33ea4b3-c282-4391-8da3-3a499a23bb16","Type":"ContainerStarted","Data":"f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24"} Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.273247 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" event={"ID":"b33ea4b3-c282-4391-8da3-3a499a23bb16","Type":"ContainerStarted","Data":"a833e9302bf05fbbd0101260e933bff67a0026bcb299eaddd8ac95975bff339d"} Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.276767 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/1.log" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.283128 4772 scope.go:117] "RemoveContainer" containerID="f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f" Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.283307 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.314988 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b
4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.333274 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.352522 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.357409 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.357465 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.357484 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.357514 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.357533 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:15Z","lastTransitionTime":"2025-11-28T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.372140 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.388526 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.404793 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.422202 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.450869 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3d784b955c9c55d29aa9f10abcb9d363f354e092e5a1345bdd98c8fd62259a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:11Z\\\",\\\"message\\\":\\\".681236 6144 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:07:11.681497 6144 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:07:11.682336 6144 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 11:07:11.682406 6144 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 11:07:11.682411 6144 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 11:07:11.682462 6144 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:07:11.682581 6144 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 11:07:11.682591 6144 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 11:07:11.682620 6144 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:07:11.682628 6144 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:07:11.682662 6144 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 11:07:11.682685 6144 factory.go:656] Stopping watch factory\\\\nI1128 11:07:11.682687 6144 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:07:11.682702 6144 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:07:11.682711 6144 ovnkube.go:599] Stopped ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:13Z\\\",\\\"message\\\":\\\"0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 11:07:13.273445 6264 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wgsks\\\\nF1128 11:07:13.273773 6264 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:13.273585 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.460574 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.460627 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.460646 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.460678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.460712 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:15Z","lastTransitionTime":"2025-11-28T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.480825 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.502231 4772 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba0
6bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.527318 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.545875 4772 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.564701 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.564761 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.564781 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.564811 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.564828 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:15Z","lastTransitionTime":"2025-11-28T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.566079 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.589881 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.606844 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qstr6"] Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.607851 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.607918 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.607976 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.622113 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.643294 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.659626 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 
11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.667253 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.667294 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.667303 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.667321 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.667331 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:15Z","lastTransitionTime":"2025-11-28T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.675845 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63
a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.698453 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.712537 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.726192 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.737134 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.750284 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.761387 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.770137 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.770205 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.770220 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.770241 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.770255 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:15Z","lastTransitionTime":"2025-11-28T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.774834 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tftxn\" (UniqueName: \"kubernetes.io/projected/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-kube-api-access-tftxn\") pod \"network-metrics-daemon-qstr6\" (UID: \"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\") " pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.774879 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs\") pod \"network-metrics-daemon-qstr6\" (UID: \"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\") " pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.785447 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f570f14ae20624ba49b28c3c3452010618162929
8ec3c4693f2628df8a38208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:13Z\\\",\\\"message\\\":\\\"0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 11:07:13.273445 6264 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wgsks\\\\nF1128 11:07:13.273773 6264 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:13.273585 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.824290 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.840312 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.858615 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.872418 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.872458 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.872468 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.872488 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.872499 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:15Z","lastTransitionTime":"2025-11-28T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.873593 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.875810 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.875914 4772 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.875944 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tftxn\" (UniqueName: \"kubernetes.io/projected/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-kube-api-access-tftxn\") pod \"network-metrics-daemon-qstr6\" (UID: \"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\") " pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.875963 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs\") pod \"network-metrics-daemon-qstr6\" (UID: \"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\") " pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.875981 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.876000 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876072 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:07:31.876037533 +0000 UTC m=+50.199280780 (durationBeforeRetry 16s). 
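Every "Failed to update status for pod" record above shares one root cause: the serving certificate of the webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-28. A minimal Go sketch for confirming that from the node itself, a hypothetical diagnostic rather than any OpenShift tooling, assuming the webhook endpoint is reachable locally:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Endpoint copied verbatim from the webhook errors in the log.
	// InsecureSkipVerify is deliberate: the goal is to inspect the
	// expired certificate, not to trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}
```

The ovnkube-controller crash loop recorded earlier is the same failure seen from the other side: its attempt to set node annotations goes through the node.network-node-identity.openshift.io webhook and dies on the identical x509 error.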
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876084 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876118 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876133 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876134 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876097 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876172 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:31.876160416 +0000 UTC m=+50.199403633 (durationBeforeRetry 16s). 
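The "not registered" errors in this stretch are a second, independent symptom: the kubelet cannot yet resolve the ConfigMaps and Secrets that the projected volumes reference (kube-root-ca.crt, openshift-service-ca.crt, networking-console-plugin-cert, metrics-daemon-secret). A hedged client-go sketch for checking whether those objects exist on the API server side; the object names are copied verbatim from the log, and the kubeconfig path is a placeholder:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust for the cluster being inspected.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// ConfigMap and Secret names taken from the MountVolume.SetUp errors.
	if _, err := cs.CoreV1().ConfigMaps("openshift-network-diagnostics").Get(ctx, "kube-root-ca.crt", metav1.GetOptions{}); err != nil {
		fmt.Println("kube-root-ca.crt:", err)
	}
	if _, err := cs.CoreV1().ConfigMaps("openshift-network-diagnostics").Get(ctx, "openshift-service-ca.crt", metav1.GetOptions{}); err != nil {
		fmt.Println("openshift-service-ca.crt:", err)
	}
	if _, err := cs.CoreV1().Secrets("openshift-network-console").Get(ctx, "networking-console-plugin-cert", metav1.GetOptions{}); err != nil {
		fmt.Println("networking-console-plugin-cert:", err)
	}
}
```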
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876182 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876187 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876194 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876186 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:31.876180247 +0000 UTC m=+50.199423474 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.876270 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876310 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs podName:def9b3ab-2dc8-4f40-9d6b-346f9cdbc386 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:16.37629602 +0000 UTC m=+34.699539247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs") pod "network-metrics-daemon-qstr6" (UID: "def9b3ab-2dc8-4f40-9d6b-346f9cdbc386") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876322 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:31.87631622 +0000 UTC m=+50.199559447 (durationBeforeRetry 16s). 
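Two different retry delays appear here: the freshly failed metrics-certs mount is retried after 500ms, while the volumes that have been failing since about 11:06:58 are pushed out 16s. That progression is consistent with plain exponential doubling (500ms, 1s, 2s, 4s, 8s, 16s). A toy sketch of such a schedule, with base and cap chosen to fit the delays in the log rather than taken from kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

// backoff returns the delay before retry attempt n (0-based),
// doubling from base up to limit. The base and limit values are
// assumptions for illustration; the log shows 500ms and 16s delays.
func backoff(n int, base, limit time.Duration) time.Duration {
	d := base
	for i := 0; i < n; i++ {
		d *= 2
		if d >= limit {
			return limit
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("attempt %d: retry in %s\n", n, backoff(n, 500*time.Millisecond, 2*time.Minute))
	}
}
```

Incidentally, the m=+50.199... suffix on the retry timestamps is Go's monotonic clock reading, which time.Time prints whenever the value still carries one.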
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876323 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:07:15 crc kubenswrapper[4772]: E1128 11:07:15.876384 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:31.876375752 +0000 UTC m=+50.199618969 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.890556 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355
e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 
configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.891991 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tftxn\" (UniqueName: \"kubernetes.io/projected/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-kube-api-access-tftxn\") pod 
\"network-metrics-daemon-qstr6\" (UID: \"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\") " pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.904218 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.917951 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.975173 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.975229 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.975237 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.975253 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:15 crc kubenswrapper[4772]: I1128 11:07:15.975263 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:15Z","lastTransitionTime":"2025-11-28T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.077945 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.078015 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.078028 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.078043 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.078053 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:16Z","lastTransitionTime":"2025-11-28T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.180422 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.180726 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.180830 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.180911 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.180982 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:16Z","lastTransitionTime":"2025-11-28T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.283553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.283615 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.283631 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.283657 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.283676 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:16Z","lastTransitionTime":"2025-11-28T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.380487 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs\") pod \"network-metrics-daemon-qstr6\" (UID: \"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\") " pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:16 crc kubenswrapper[4772]: E1128 11:07:16.380718 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:07:16 crc kubenswrapper[4772]: E1128 11:07:16.380812 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs podName:def9b3ab-2dc8-4f40-9d6b-346f9cdbc386 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:17.380788904 +0000 UTC m=+35.704032191 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs") pod "network-metrics-daemon-qstr6" (UID: "def9b3ab-2dc8-4f40-9d6b-346f9cdbc386") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.385548 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.385576 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.385587 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.385602 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.385613 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:16Z","lastTransitionTime":"2025-11-28T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.488229 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.488271 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.488280 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.488295 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.488308 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:16Z","lastTransitionTime":"2025-11-28T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.590625 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.590859 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.590872 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.590889 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.590902 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:16Z","lastTransitionTime":"2025-11-28T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.693344 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.693420 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.693433 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.693452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.693467 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:16Z","lastTransitionTime":"2025-11-28T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.797045 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.797108 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.797128 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.797153 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.797171 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:16Z","lastTransitionTime":"2025-11-28T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.899907 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.899966 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.899980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.899998 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.900011 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:16Z","lastTransitionTime":"2025-11-28T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.994319 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.994433 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.994446 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:16 crc kubenswrapper[4772]: I1128 11:07:16.994350 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:16 crc kubenswrapper[4772]: E1128 11:07:16.994549 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:16 crc kubenswrapper[4772]: E1128 11:07:16.994740 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:16 crc kubenswrapper[4772]: E1128 11:07:16.994985 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:16 crc kubenswrapper[4772]: E1128 11:07:16.995173 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.008511 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.008595 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.008615 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.008643 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.008663 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:17Z","lastTransitionTime":"2025-11-28T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.111317 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.111412 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.111433 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.111461 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.111481 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:17Z","lastTransitionTime":"2025-11-28T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.214517 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.214589 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.214610 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.214637 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.214656 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:17Z","lastTransitionTime":"2025-11-28T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.273066 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.274512 4772 scope.go:117] "RemoveContainer" containerID="f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f" Nov 28 11:07:17 crc kubenswrapper[4772]: E1128 11:07:17.274887 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.316858 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.316921 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.316939 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.316963 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.316984 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:17Z","lastTransitionTime":"2025-11-28T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.392283 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs\") pod \"network-metrics-daemon-qstr6\" (UID: \"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\") " pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:17 crc kubenswrapper[4772]: E1128 11:07:17.392970 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:07:17 crc kubenswrapper[4772]: E1128 11:07:17.393071 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs podName:def9b3ab-2dc8-4f40-9d6b-346f9cdbc386 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:19.393041619 +0000 UTC m=+37.716284886 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs") pod "network-metrics-daemon-qstr6" (UID: "def9b3ab-2dc8-4f40-9d6b-346f9cdbc386") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.420572 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.420644 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.420671 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.420702 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.420729 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:17Z","lastTransitionTime":"2025-11-28T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.523768 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.523848 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.523874 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.523906 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.523931 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:17Z","lastTransitionTime":"2025-11-28T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.626175 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.626240 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.626262 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.626301 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.626340 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:17Z","lastTransitionTime":"2025-11-28T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.728957 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.728992 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.729001 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.729014 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.729024 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:17Z","lastTransitionTime":"2025-11-28T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.831964 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.832031 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.832053 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.832083 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.832105 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:17Z","lastTransitionTime":"2025-11-28T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.934224 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.934272 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.934310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.934327 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:17 crc kubenswrapper[4772]: I1128 11:07:17.934338 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:17Z","lastTransitionTime":"2025-11-28T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.036594 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.036658 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.036675 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.036694 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.036705 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:18Z","lastTransitionTime":"2025-11-28T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.138882 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.138925 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.138938 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.138954 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.138964 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:18Z","lastTransitionTime":"2025-11-28T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.241706 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.241739 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.241747 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.241760 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.241768 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:18Z","lastTransitionTime":"2025-11-28T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.344243 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.344283 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.344294 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.344309 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.344319 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:18Z","lastTransitionTime":"2025-11-28T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.446647 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.446689 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.446699 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.446714 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.446726 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:18Z","lastTransitionTime":"2025-11-28T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.549638 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.549665 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.549673 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.549686 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.549696 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:18Z","lastTransitionTime":"2025-11-28T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.651513 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.651559 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.651568 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.651580 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.651589 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:18Z","lastTransitionTime":"2025-11-28T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.754551 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.754616 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.754628 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.754643 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.754676 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:18Z","lastTransitionTime":"2025-11-28T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.857550 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.857591 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.857606 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.857621 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.857631 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:18Z","lastTransitionTime":"2025-11-28T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.960773 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.960833 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.960842 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.960861 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.960870 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:18Z","lastTransitionTime":"2025-11-28T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.994450 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.994519 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.994471 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:18 crc kubenswrapper[4772]: I1128 11:07:18.994469 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:18 crc kubenswrapper[4772]: E1128 11:07:18.994666 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:18 crc kubenswrapper[4772]: E1128 11:07:18.994780 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:18 crc kubenswrapper[4772]: E1128 11:07:18.994859 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:18 crc kubenswrapper[4772]: E1128 11:07:18.994990 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.063715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.063775 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.063793 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.063818 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.063838 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:19Z","lastTransitionTime":"2025-11-28T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.166415 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.166485 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.166512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.166541 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.166563 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:19Z","lastTransitionTime":"2025-11-28T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.269658 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.269720 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.269732 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.269753 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.269769 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:19Z","lastTransitionTime":"2025-11-28T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.373088 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.373830 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.373968 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.374080 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.374174 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:19Z","lastTransitionTime":"2025-11-28T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.431235 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs\") pod \"network-metrics-daemon-qstr6\" (UID: \"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\") " pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:19 crc kubenswrapper[4772]: E1128 11:07:19.431804 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:07:19 crc kubenswrapper[4772]: E1128 11:07:19.432145 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs podName:def9b3ab-2dc8-4f40-9d6b-346f9cdbc386 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:23.432104982 +0000 UTC m=+41.755348249 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs") pod "network-metrics-daemon-qstr6" (UID: "def9b3ab-2dc8-4f40-9d6b-346f9cdbc386") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.476992 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.477047 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.477060 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.477079 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.477092 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:19Z","lastTransitionTime":"2025-11-28T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.580705 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.580746 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.580757 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.580773 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.580785 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:19Z","lastTransitionTime":"2025-11-28T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.684016 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.684119 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.684148 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.684181 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.684200 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:19Z","lastTransitionTime":"2025-11-28T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.787058 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.787113 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.787128 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.787148 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.787162 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:19Z","lastTransitionTime":"2025-11-28T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.889761 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.889830 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.889847 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.889876 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.889894 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:19Z","lastTransitionTime":"2025-11-28T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.993514 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.993557 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.993568 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.993585 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:19 crc kubenswrapper[4772]: I1128 11:07:19.993595 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:19Z","lastTransitionTime":"2025-11-28T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.096737 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.096773 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.096783 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.096805 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.096816 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:20Z","lastTransitionTime":"2025-11-28T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.199732 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.199766 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.199777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.199790 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.199799 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:20Z","lastTransitionTime":"2025-11-28T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.301960 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.302014 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.302026 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.302049 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.302064 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:20Z","lastTransitionTime":"2025-11-28T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.405291 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.405349 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.405371 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.405386 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.405397 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:20Z","lastTransitionTime":"2025-11-28T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.508153 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.508561 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.508797 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.508947 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.509071 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:20Z","lastTransitionTime":"2025-11-28T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.613113 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.613192 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.613211 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.613239 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.613261 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:20Z","lastTransitionTime":"2025-11-28T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.716709 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.717061 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.717263 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.717436 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.717571 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:20Z","lastTransitionTime":"2025-11-28T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.820787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.820850 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.820867 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.820894 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.820911 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:20Z","lastTransitionTime":"2025-11-28T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.924306 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.924417 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.924435 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.924500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.924521 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:20Z","lastTransitionTime":"2025-11-28T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.977737 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.977801 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.977813 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.977830 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.977842 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:20Z","lastTransitionTime":"2025-11-28T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.993749 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:20 crc kubenswrapper[4772]: E1128 11:07:20.994032 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.993813 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:20 crc kubenswrapper[4772]: E1128 11:07:20.994280 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.993878 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:20 crc kubenswrapper[4772]: E1128 11:07:20.994532 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:20 crc kubenswrapper[4772]: I1128 11:07:20.993799 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:20 crc kubenswrapper[4772]: E1128 11:07:20.994994 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:20 crc kubenswrapper[4772]: E1128 11:07:20.996862 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:20Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.003247 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.003300 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.003316 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.003342 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.003376 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:21Z","lastTransitionTime":"2025-11-28T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:21 crc kubenswrapper[4772]: E1128 11:07:21.015106 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.019068 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.019130 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.019150 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.019174 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.019192 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:21Z","lastTransitionTime":"2025-11-28T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:21 crc kubenswrapper[4772]: E1128 11:07:21.039721 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.045206 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.045264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.045283 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.045310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.045328 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:21Z","lastTransitionTime":"2025-11-28T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:21 crc kubenswrapper[4772]: E1128 11:07:21.063917 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.068502 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.068571 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.068594 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.068629 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.068653 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:21Z","lastTransitionTime":"2025-11-28T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:21 crc kubenswrapper[4772]: E1128 11:07:21.089074 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:21 crc kubenswrapper[4772]: E1128 11:07:21.089256 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.091482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.091536 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.091554 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.091577 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.091596 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:21Z","lastTransitionTime":"2025-11-28T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.194609 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.194662 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.194678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.194702 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.194723 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:21Z","lastTransitionTime":"2025-11-28T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.297192 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.297247 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.297263 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.297287 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.297304 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:21Z","lastTransitionTime":"2025-11-28T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.399819 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.399871 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.399888 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.399912 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.399929 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:21Z","lastTransitionTime":"2025-11-28T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.503283 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.504224 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.504486 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.504687 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.504886 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:21Z","lastTransitionTime":"2025-11-28T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.608626 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.608690 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.608710 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.608735 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.608754 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:21Z","lastTransitionTime":"2025-11-28T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.712076 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.712130 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.712149 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.712174 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.712192 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:21Z","lastTransitionTime":"2025-11-28T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.815046 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.815103 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.815116 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.815136 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.815150 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:21Z","lastTransitionTime":"2025-11-28T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.918531 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.918802 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.918886 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.918980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:21 crc kubenswrapper[4772]: I1128 11:07:21.919071 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:21Z","lastTransitionTime":"2025-11-28T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.010793 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.022739 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.022798 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.022820 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.022848 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.022871 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:22Z","lastTransitionTime":"2025-11-28T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.027738 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.045103 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.079722 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.097886 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.117739 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.126612 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.126905 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.127117 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.127433 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.127661 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:22Z","lastTransitionTime":"2025-11-28T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.132314 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.150195 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.163187 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.189792 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf646
2d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:13Z\\\",\\\"message\\\":\\\"0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 11:07:13.273445 6264 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wgsks\\\\nF1128 11:07:13.273773 6264 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:13.273585 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.209145 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.224079 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.230905 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.230958 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.230971 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.230996 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.231012 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:22Z","lastTransitionTime":"2025-11-28T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.240315 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.254203 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.273328 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.291331 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.305627 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:22Z is after 2025-08-24T17:21:41Z" Nov 28 
11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.333426 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.333506 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.333523 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.333542 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.333555 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:22Z","lastTransitionTime":"2025-11-28T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.437646 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.437699 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.437712 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.437733 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.437750 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:22Z","lastTransitionTime":"2025-11-28T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.542054 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.542132 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.542160 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.542192 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.542217 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:22Z","lastTransitionTime":"2025-11-28T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.644887 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.644928 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.644938 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.644951 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.644960 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:22Z","lastTransitionTime":"2025-11-28T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.748433 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.748522 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.748553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.748590 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.748618 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:22Z","lastTransitionTime":"2025-11-28T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.852051 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.852119 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.852141 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.852170 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.852191 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:22Z","lastTransitionTime":"2025-11-28T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.955653 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.955759 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.955787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.955823 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.955844 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:22Z","lastTransitionTime":"2025-11-28T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.994003 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.994111 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.994035 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:22 crc kubenswrapper[4772]: E1128 11:07:22.994227 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:22 crc kubenswrapper[4772]: E1128 11:07:22.994447 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:22 crc kubenswrapper[4772]: E1128 11:07:22.994582 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:22 crc kubenswrapper[4772]: I1128 11:07:22.994042 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:22 crc kubenswrapper[4772]: E1128 11:07:22.995172 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.058727 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.058804 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.058823 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.058844 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.058862 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:23Z","lastTransitionTime":"2025-11-28T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.161971 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.162025 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.162042 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.162068 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.162086 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:23Z","lastTransitionTime":"2025-11-28T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.265382 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.265437 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.265454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.265480 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.265499 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:23Z","lastTransitionTime":"2025-11-28T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.369422 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.369487 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.369508 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.369539 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.369556 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:23Z","lastTransitionTime":"2025-11-28T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.472329 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.472433 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.472456 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.472484 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.472507 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:23Z","lastTransitionTime":"2025-11-28T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.474077 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs\") pod \"network-metrics-daemon-qstr6\" (UID: \"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\") " pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:23 crc kubenswrapper[4772]: E1128 11:07:23.474250 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:07:23 crc kubenswrapper[4772]: E1128 11:07:23.474386 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs podName:def9b3ab-2dc8-4f40-9d6b-346f9cdbc386 nodeName:}" failed. No retries permitted until 2025-11-28 11:07:31.474323031 +0000 UTC m=+49.797566288 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs") pod "network-metrics-daemon-qstr6" (UID: "def9b3ab-2dc8-4f40-9d6b-346f9cdbc386") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.575184 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.575253 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.575277 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.575306 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.575327 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:23Z","lastTransitionTime":"2025-11-28T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.677768 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.677829 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.677851 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.677910 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.677936 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:23Z","lastTransitionTime":"2025-11-28T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.780782 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.780855 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.780882 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.780913 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.780937 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:23Z","lastTransitionTime":"2025-11-28T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.884546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.884600 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.884618 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.885045 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.885085 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:23Z","lastTransitionTime":"2025-11-28T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.988744 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.988828 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.988854 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.988885 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:23 crc kubenswrapper[4772]: I1128 11:07:23.988905 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:23Z","lastTransitionTime":"2025-11-28T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.092795 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.092856 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.092874 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.092901 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.092920 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:24Z","lastTransitionTime":"2025-11-28T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.196664 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.196726 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.196750 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.196780 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.196805 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:24Z","lastTransitionTime":"2025-11-28T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.300247 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.300324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.300346 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.300420 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.300446 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:24Z","lastTransitionTime":"2025-11-28T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.404132 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.404207 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.404229 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.404261 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.404283 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:24Z","lastTransitionTime":"2025-11-28T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.507871 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.507951 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.507978 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.508008 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.508034 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:24Z","lastTransitionTime":"2025-11-28T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.610973 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.611051 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.611078 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.611103 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.611121 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:24Z","lastTransitionTime":"2025-11-28T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.713665 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.713730 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.713751 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.713774 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.713793 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:24Z","lastTransitionTime":"2025-11-28T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.816253 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.816298 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.816314 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.816336 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.816351 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:24Z","lastTransitionTime":"2025-11-28T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.919774 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.919829 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.919843 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.919863 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.919878 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:24Z","lastTransitionTime":"2025-11-28T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.994048 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.994203 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:24 crc kubenswrapper[4772]: E1128 11:07:24.994283 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.994350 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:24 crc kubenswrapper[4772]: I1128 11:07:24.994453 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:24 crc kubenswrapper[4772]: E1128 11:07:24.994434 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:24 crc kubenswrapper[4772]: E1128 11:07:24.994560 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:24 crc kubenswrapper[4772]: E1128 11:07:24.994649 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.022716 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.022774 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.022793 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.022822 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.022845 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:25Z","lastTransitionTime":"2025-11-28T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.127088 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.127166 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.127185 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.127216 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.127239 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:25Z","lastTransitionTime":"2025-11-28T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.231073 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.231155 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.231185 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.231218 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.231237 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:25Z","lastTransitionTime":"2025-11-28T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.334250 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.334314 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.334331 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.334356 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.334412 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:25Z","lastTransitionTime":"2025-11-28T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.437743 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.437797 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.437847 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.437875 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.437894 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:25Z","lastTransitionTime":"2025-11-28T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.541511 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.541569 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.541586 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.541610 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.541628 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:25Z","lastTransitionTime":"2025-11-28T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.645565 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.645631 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.645649 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.645675 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.645742 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:25Z","lastTransitionTime":"2025-11-28T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.748955 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.749017 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.749034 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.749061 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.749081 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:25Z","lastTransitionTime":"2025-11-28T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.851766 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.851832 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.851850 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.851877 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.851894 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:25Z","lastTransitionTime":"2025-11-28T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.955494 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.955562 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.955586 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.955621 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:25 crc kubenswrapper[4772]: I1128 11:07:25.955647 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:25Z","lastTransitionTime":"2025-11-28T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.058495 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.058561 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.058579 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.058608 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.058626 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:26Z","lastTransitionTime":"2025-11-28T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.163182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.163749 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.163862 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.164615 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.164653 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:26Z","lastTransitionTime":"2025-11-28T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.267087 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.267160 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.267179 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.267205 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.267225 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:26Z","lastTransitionTime":"2025-11-28T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.369760 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.370029 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.370116 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.370205 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.370286 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:26Z","lastTransitionTime":"2025-11-28T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.474123 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.474194 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.474219 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.474248 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.474268 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:26Z","lastTransitionTime":"2025-11-28T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.578060 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.578612 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.578703 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.578798 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.578923 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:26Z","lastTransitionTime":"2025-11-28T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.682747 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.682800 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.682820 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.682850 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.682869 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:26Z","lastTransitionTime":"2025-11-28T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.786030 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.786099 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.786118 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.786142 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.786176 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:26Z","lastTransitionTime":"2025-11-28T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.889931 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.889991 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.890010 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.890036 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.890071 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:26Z","lastTransitionTime":"2025-11-28T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.993632 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.993778 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.993804 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:26 crc kubenswrapper[4772]: E1128 11:07:26.993849 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.993632 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:26 crc kubenswrapper[4772]: E1128 11:07:26.993982 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:26 crc kubenswrapper[4772]: E1128 11:07:26.994268 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:26 crc kubenswrapper[4772]: E1128 11:07:26.994504 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.994431 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.994607 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.994694 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.994790 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:26 crc kubenswrapper[4772]: I1128 11:07:26.994867 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:26Z","lastTransitionTime":"2025-11-28T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.098977 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.099048 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.099068 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.099096 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.099115 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:27Z","lastTransitionTime":"2025-11-28T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.203785 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.203866 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.203884 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.203917 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.203935 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:27Z","lastTransitionTime":"2025-11-28T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.307449 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.307532 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.307557 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.307583 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.307603 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:27Z","lastTransitionTime":"2025-11-28T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.410975 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.411043 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.411061 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.411090 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.411108 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:27Z","lastTransitionTime":"2025-11-28T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.515239 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.515306 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.515325 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.515391 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.515420 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:27Z","lastTransitionTime":"2025-11-28T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.618533 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.618668 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.618690 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.618721 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.618739 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:27Z","lastTransitionTime":"2025-11-28T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.721230 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.721306 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.721323 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.721383 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.721405 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:27Z","lastTransitionTime":"2025-11-28T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.824940 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.825004 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.825021 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.825050 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.825068 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:27Z","lastTransitionTime":"2025-11-28T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.928317 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.928475 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.928502 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.928530 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:27 crc kubenswrapper[4772]: I1128 11:07:27.928549 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:27Z","lastTransitionTime":"2025-11-28T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.030715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.030938 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.031028 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.031099 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.031164 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:28Z","lastTransitionTime":"2025-11-28T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.133307 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.133331 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.133339 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.133352 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.133375 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:28Z","lastTransitionTime":"2025-11-28T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.236631 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.236695 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.236712 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.236743 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.236762 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:28Z","lastTransitionTime":"2025-11-28T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.286195 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.297150 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.302685 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.316040 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.330884 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.339889 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.339937 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.339946 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.339960 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.340281 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:28Z","lastTransitionTime":"2025-11-28T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.345129 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.358473 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.369341 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 
11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.383477 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.414509 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b
4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.430584 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.442472 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.442507 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.442516 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.442530 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.442540 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:28Z","lastTransitionTime":"2025-11-28T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.447068 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.458671 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.468744 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.478699 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.497450 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:13Z\\\",\\\"message\\\":\\\"0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 11:07:13.273445 6264 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wgsks\\\\nF1128 11:07:13.273773 6264 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:13.273585 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.511804 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.523740 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.537661 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.545576 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.545610 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.545619 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.545634 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.545644 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:28Z","lastTransitionTime":"2025-11-28T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.648449 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.648491 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.648504 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.648525 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.648538 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:28Z","lastTransitionTime":"2025-11-28T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.750798 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.750867 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.750885 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.750910 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.750928 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:28Z","lastTransitionTime":"2025-11-28T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.853289 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.853392 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.853420 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.853451 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.853472 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:28Z","lastTransitionTime":"2025-11-28T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.956016 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.956061 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.956099 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.956116 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.956128 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:28Z","lastTransitionTime":"2025-11-28T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.994002 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.994054 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.994002 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:28 crc kubenswrapper[4772]: E1128 11:07:28.994159 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.994006 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:28 crc kubenswrapper[4772]: E1128 11:07:28.994518 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:28 crc kubenswrapper[4772]: E1128 11:07:28.994575 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:28 crc kubenswrapper[4772]: E1128 11:07:28.994714 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:28 crc kubenswrapper[4772]: I1128 11:07:28.995439 4772 scope.go:117] "RemoveContainer" containerID="f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.059309 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.059696 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.059715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.059762 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.059780 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:29Z","lastTransitionTime":"2025-11-28T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.163463 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.163505 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.163519 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.163538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.163559 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:29Z","lastTransitionTime":"2025-11-28T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.266229 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.266272 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.266288 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.266304 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.266314 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:29Z","lastTransitionTime":"2025-11-28T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.331733 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/1.log" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.334495 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerStarted","Data":"87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4"} Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.335432 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.347219 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.369033 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.369058 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.369067 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.369079 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.369089 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:29Z","lastTransitionTime":"2025-11-28T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.370325 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.417103 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.434116 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe579c5e-4747-41d8-babd-a7d6142169f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c22fe366a9ad1de0b215b9a9583ae3cb0a683107919c53de47fa2d49acc799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20035e17d1144a41abbfa7f960dd7a68e1e4ce70ef574dcfedaebb00ce96d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aebc818d09cc998a9cde1f0342e0cb5cf4da00f41a7a6710631f27ada2d58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-control
ler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.462545 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b
4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.471092 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.471131 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.471140 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.471157 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.471167 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:29Z","lastTransitionTime":"2025-11-28T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.479942 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.491944 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.501226 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.513824 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.525112 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.546574 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf646
2d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:13Z\\\",\\\"message\\\":\\\"0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 11:07:13.273445 6264 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wgsks\\\\nF1128 11:07:13.273773 6264 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:13.273585 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.559943 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.571580 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.573028 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.573056 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.573066 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.573082 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.573096 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:29Z","lastTransitionTime":"2025-11-28T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.586304 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.600247 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.612697 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.625434 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.639300 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.675850 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.675888 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.675899 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.675914 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.675924 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:29Z","lastTransitionTime":"2025-11-28T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.778593 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.778627 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.778636 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.778655 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.778664 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:29Z","lastTransitionTime":"2025-11-28T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.881166 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.881434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.881521 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.881618 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.881701 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:29Z","lastTransitionTime":"2025-11-28T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.983758 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.983807 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.983821 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.983842 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:29 crc kubenswrapper[4772]: I1128 11:07:29.983855 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:29Z","lastTransitionTime":"2025-11-28T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.085607 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.085630 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.085639 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.085652 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.085661 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:30Z","lastTransitionTime":"2025-11-28T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.188694 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.188770 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.188795 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.188824 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.188849 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:30Z","lastTransitionTime":"2025-11-28T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.291342 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.291425 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.291446 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.291470 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.291488 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:30Z","lastTransitionTime":"2025-11-28T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.341501 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/2.log" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.342377 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/1.log" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.346279 4772 generic.go:334] "Generic (PLEG): container finished" podID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerID="87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4" exitCode=1 Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.346320 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerDied","Data":"87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4"} Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.346391 4772 scope.go:117] "RemoveContainer" containerID="f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.347603 4772 scope.go:117] "RemoveContainer" containerID="87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4" Nov 28 11:07:30 crc kubenswrapper[4772]: E1128 11:07:30.347873 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.373501 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.388615 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.394334 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.394445 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.394470 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.394497 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.394515 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:30Z","lastTransitionTime":"2025-11-28T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.403180 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.424129 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.442328 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.460235 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.474042 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 
11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.489046 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.496851 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.496893 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.496904 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.496922 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.496934 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:30Z","lastTransitionTime":"2025-11-28T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.505808 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.516994 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.528604 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.540287 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe579c5e-4747-41d8-babd-a7d6142169f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c22fe366a9ad1de0b215b9a9583ae3cb0a683107919c53de47fa2d49acc799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20035e17d1144a41abbfa7f960dd7a68e1e4ce70ef574dcfedaebb00ce96d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aebc818d09cc998a9cde1f0342e0cb5cf4da00f41a7a6710631f27ada2d58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.564122 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0
7b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.587698 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f570f14ae20624ba49b28c3c34520106181629298ec3c4693f2628df8a38208f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:13Z\\\",\\\"message\\\":\\\"0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 11:07:13.273445 6264 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wgsks\\\\nF1128 11:07:13.273773 6264 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:13Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:13.273585 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:30Z\\\",\\\"message\\\":\\\"583 6477 services_controller.go:445] Built service 
openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1128 11:07:29.904627 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:29.904627 6477 services_controller.go:451] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc30
89f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.600194 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.600250 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.600266 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.600287 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 
11:07:30.600306 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:30Z","lastTransitionTime":"2025-11-28T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.605262 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mou
ntPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.617930 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 
11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.632541 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.642030 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.703658 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.703708 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.703719 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.703739 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.703786 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:30Z","lastTransitionTime":"2025-11-28T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.806175 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.806221 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.806235 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.806252 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.806265 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:30Z","lastTransitionTime":"2025-11-28T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.908937 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.908976 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.908986 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.909004 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.909017 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:30Z","lastTransitionTime":"2025-11-28T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.994009 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.994101 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.994024 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:30 crc kubenswrapper[4772]: I1128 11:07:30.994024 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:30 crc kubenswrapper[4772]: E1128 11:07:30.994256 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:30 crc kubenswrapper[4772]: E1128 11:07:30.994658 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:30 crc kubenswrapper[4772]: E1128 11:07:30.994779 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:30 crc kubenswrapper[4772]: E1128 11:07:30.994866 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.020845 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.020919 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.020942 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.020973 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.020998 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:31Z","lastTransitionTime":"2025-11-28T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.124102 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.124154 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.124170 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.124191 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.124204 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:31Z","lastTransitionTime":"2025-11-28T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.147081 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.147125 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.147137 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.147161 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.147175 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:31Z","lastTransitionTime":"2025-11-28T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.164286 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.169032 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.169095 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.169113 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.169140 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.169159 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:31Z","lastTransitionTime":"2025-11-28T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.188600 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.194572 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.194641 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.194659 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.194684 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.194737 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:31Z","lastTransitionTime":"2025-11-28T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} [... three further status-update attempts at 11:07:31.210895, 11:07:31.229313, and 11:07:31.252156 repeat the identical patch payload, fail with the same "node.network-node-identity.openshift.io" webhook certificate-expiry error, and are each preceded by the same NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID/NodeNotReady event recordings and "Node became not ready" condition ...] Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.252270 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.254254 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.254318 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.254334 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.254387 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.254407 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:31Z","lastTransitionTime":"2025-11-28T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.350803 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/2.log" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.354038 4772 scope.go:117] "RemoveContainer" containerID="87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4" Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.354177 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.356752 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.356784 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.356792 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.356806 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.356818 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:31Z","lastTransitionTime":"2025-11-28T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.368787 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.381734 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.394393 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.407772 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.418965 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.428764 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.442887 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 
11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.456588 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.459333 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.459391 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.459403 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.459422 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.459434 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:31Z","lastTransitionTime":"2025-11-28T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.469133 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.478583 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.490503 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.501880 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe579c5e-4747-41d8-babd-a7d6142169f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c22fe366a9ad1de0b215b9a9583ae3cb0a683107919c53de47fa2d49acc799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20035e17d1144a41abbfa7f960dd7a68e1e4ce70ef574dcfedaebb00ce96d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aebc818d09cc998a9cde1f0342e0cb5cf4da00f41a7a6710631f27ada2d58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.531021 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0
7b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.550973 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:30Z\\\",\\\"message\\\":\\\"583 6477 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1128 11:07:29.904627 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:29.904627 6477 services_controller.go:451] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.561611 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.561642 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.561651 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.561666 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.561676 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:31Z","lastTransitionTime":"2025-11-28T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.565250 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs\") pod \"network-metrics-daemon-qstr6\" (UID: \"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\") " pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.565411 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.565471 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs podName:def9b3ab-2dc8-4f40-9d6b-346f9cdbc386 nodeName:}" failed. 
No retries permitted until 2025-11-28 11:07:47.565454375 +0000 UTC m=+65.888697602 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs") pod "network-metrics-daemon-qstr6" (UID: "def9b3ab-2dc8-4f40-9d6b-346f9cdbc386") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.573257 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\"
:\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.587068 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc 
kubenswrapper[4772]: I1128 11:07:31.599416 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.609432 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.664985 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.665021 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.665030 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.665044 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.665055 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:31Z","lastTransitionTime":"2025-11-28T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.767980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.768017 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.768029 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.768044 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.768054 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:31Z","lastTransitionTime":"2025-11-28T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.870318 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.870410 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.870419 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.870433 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.870442 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:31Z","lastTransitionTime":"2025-11-28T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.968475 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.968578 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.968646 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:03.968618271 +0000 UTC m=+82.291861558 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.968676 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.968690 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.968701 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.968749 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:08:03.968736404 +0000 UTC m=+82.291979631 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.968766 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.968786 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.968811 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.968880 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.968906 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:08:03.968898738 +0000 UTC m=+82.292141965 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.968920 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.968936 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.968941 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.969029 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:08:03.969010931 +0000 UTC m=+82.292254188 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.968948 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:31 crc kubenswrapper[4772]: E1128 11:07:31.969098 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:08:03.969085273 +0000 UTC m=+82.292328540 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.973276 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.973304 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.973313 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.973328 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:31 crc kubenswrapper[4772]: I1128 11:07:31.973337 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:31Z","lastTransitionTime":"2025-11-28T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.007719 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe579c5e-4747-41d8-babd-a7d6142169f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c22fe366a9ad1de0b215b9a9583ae3cb0a683107919c53de47fa2d49acc799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20035e17d1144a41abbfa7f960dd7a68e1e4ce70ef574dcfedaebb00ce96d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aebc818d09cc998a9cde1f0342e0cb5cf4da00f41a7a6710631f27ada2d58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.038027 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.051337 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.069961 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.075154 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.075199 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.075215 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.075233 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.075248 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:32Z","lastTransitionTime":"2025-11-28T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.083791 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.094598 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.106631 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.116411 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.136487 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:30Z\\\",\\\"message\\\":\\\"583 6477 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1128 11:07:29.904627 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:29.904627 6477 services_controller.go:451] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.152520 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.165786 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.177967 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.178011 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.178019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.178034 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.178043 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:32Z","lastTransitionTime":"2025-11-28T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.181152 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.195391 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.207226 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.219537 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.234087 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.245716 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.255334 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:32Z is after 2025-08-24T17:21:41Z" Nov 28 
11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.280724 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.280763 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.280771 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.280785 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.280795 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:32Z","lastTransitionTime":"2025-11-28T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.994194 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:07:32 crc kubenswrapper[4772]: E1128 11:07:32.994658 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.994418 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:07:32 crc kubenswrapper[4772]: E1128 11:07:32.994868 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:07:32 crc kubenswrapper[4772]: I1128 11:07:32.994307 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:07:32 crc kubenswrapper[4772]: E1128 11:07:32.995042 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:32 crc kubenswrapper[4772]: E1128 11:07:32.995224 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.003741 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.003796 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.003813 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.003838 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.003856 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:33Z","lastTransitionTime":"2025-11-28T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.106561 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.106627 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.106645 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.106668 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.106685 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:33Z","lastTransitionTime":"2025-11-28T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.209724 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.209764 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.209772 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.209789 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.209800 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:33Z","lastTransitionTime":"2025-11-28T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.312186 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.312388 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.312500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.312669 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.312758 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:33Z","lastTransitionTime":"2025-11-28T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.415082 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.415424 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.415517 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.415599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.415678 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:33Z","lastTransitionTime":"2025-11-28T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.517751 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.517784 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.517793 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.517805 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.517815 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:33Z","lastTransitionTime":"2025-11-28T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.620430 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.620704 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.620791 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.620871 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.620942 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:33Z","lastTransitionTime":"2025-11-28T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.723580 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.723634 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.723651 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.723674 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.723692 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:33Z","lastTransitionTime":"2025-11-28T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.826334 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.826614 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.826684 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.826754 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.826838 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:33Z","lastTransitionTime":"2025-11-28T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.929575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.929625 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.929646 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.929661 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:33 crc kubenswrapper[4772]: I1128 11:07:33.929673 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:33Z","lastTransitionTime":"2025-11-28T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.031545 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.031871 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.031889 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.031916 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.031936 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:34Z","lastTransitionTime":"2025-11-28T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.134723 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.135050 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.135181 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.135309 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.135503 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:34Z","lastTransitionTime":"2025-11-28T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.238512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.238787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.238882 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.238950 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.239017 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:34Z","lastTransitionTime":"2025-11-28T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.341968 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.342480 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.342670 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.342856 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.343041 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:34Z","lastTransitionTime":"2025-11-28T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.445894 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.446196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.446394 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.446608 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.446883 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:34Z","lastTransitionTime":"2025-11-28T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.549981 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.550470 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.550681 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.550852 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.550994 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:34Z","lastTransitionTime":"2025-11-28T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.653647 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.653880 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.653968 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.654048 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.654128 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:34Z","lastTransitionTime":"2025-11-28T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.756343 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.756624 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.756763 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.756866 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.756985 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:34Z","lastTransitionTime":"2025-11-28T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.859652 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.859694 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.859703 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.859718 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.859727 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:34Z","lastTransitionTime":"2025-11-28T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.962097 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.962150 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.962161 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.962178 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.962188 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:34Z","lastTransitionTime":"2025-11-28T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.993787 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.993813 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.993868 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:34 crc kubenswrapper[4772]: I1128 11:07:34.994172 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:34 crc kubenswrapper[4772]: E1128 11:07:34.994446 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:34 crc kubenswrapper[4772]: E1128 11:07:34.994698 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:34 crc kubenswrapper[4772]: E1128 11:07:34.994896 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:34 crc kubenswrapper[4772]: E1128 11:07:34.994797 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.064797 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.064846 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.064865 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.064889 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.064908 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:35Z","lastTransitionTime":"2025-11-28T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.167408 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.167714 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.167945 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.168188 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.168467 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:35Z","lastTransitionTime":"2025-11-28T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.270977 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.271016 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.271029 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.271048 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.271061 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:35Z","lastTransitionTime":"2025-11-28T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.373413 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.373503 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.373517 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.373546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.373559 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:35Z","lastTransitionTime":"2025-11-28T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.476856 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.477208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.477303 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.477430 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.477544 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:35Z","lastTransitionTime":"2025-11-28T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.581010 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.581054 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.581066 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.581084 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.581098 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:35Z","lastTransitionTime":"2025-11-28T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.683954 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.684010 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.684026 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.684049 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.684067 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:35Z","lastTransitionTime":"2025-11-28T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.786816 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.787159 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.787261 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.787352 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.787463 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:35Z","lastTransitionTime":"2025-11-28T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.890342 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.890395 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.890405 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.890419 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.890431 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:35Z","lastTransitionTime":"2025-11-28T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.993851 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.993924 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.993945 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.993996 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:35 crc kubenswrapper[4772]: I1128 11:07:35.994013 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:35Z","lastTransitionTime":"2025-11-28T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.097292 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.097384 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.097397 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.097414 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.097430 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:36Z","lastTransitionTime":"2025-11-28T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.200460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.200528 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.200543 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.200562 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.200576 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:36Z","lastTransitionTime":"2025-11-28T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.304066 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.304390 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.304467 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.304550 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.304615 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:36Z","lastTransitionTime":"2025-11-28T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.407101 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.407175 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.407197 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.407230 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.407252 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:36Z","lastTransitionTime":"2025-11-28T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.511264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.511326 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.511345 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.511411 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.511431 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:36Z","lastTransitionTime":"2025-11-28T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.614972 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.615536 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.615684 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.615823 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.616076 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:36Z","lastTransitionTime":"2025-11-28T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.719512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.722807 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.722949 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.723041 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.723138 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:36Z","lastTransitionTime":"2025-11-28T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.826216 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.826256 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.826266 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.826281 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.826293 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:36Z","lastTransitionTime":"2025-11-28T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.929009 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.929058 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.929070 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.929085 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.929096 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:36Z","lastTransitionTime":"2025-11-28T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.993894 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.993960 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:36 crc kubenswrapper[4772]: E1128 11:07:36.994083 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.994169 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:36 crc kubenswrapper[4772]: I1128 11:07:36.994184 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:36 crc kubenswrapper[4772]: E1128 11:07:36.994442 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:36 crc kubenswrapper[4772]: E1128 11:07:36.994598 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:36 crc kubenswrapper[4772]: E1128 11:07:36.994715 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.032788 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.033041 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.033210 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.033340 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.033525 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:37Z","lastTransitionTime":"2025-11-28T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.137493 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.137584 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.137611 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.137651 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.137678 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:37Z","lastTransitionTime":"2025-11-28T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.241058 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.241308 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.241443 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.241581 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.241696 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:37Z","lastTransitionTime":"2025-11-28T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.345017 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.345074 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.345090 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.345108 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.345120 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:37Z","lastTransitionTime":"2025-11-28T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.448230 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.448648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.448811 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.448982 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.449126 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:37Z","lastTransitionTime":"2025-11-28T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.553725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.553767 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.553776 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.553794 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.553804 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:37Z","lastTransitionTime":"2025-11-28T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.655993 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.656059 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.656072 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.656092 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.656106 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:37Z","lastTransitionTime":"2025-11-28T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.760081 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.760174 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.760196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.760227 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.760249 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:37Z","lastTransitionTime":"2025-11-28T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.868528 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.868621 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.868651 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.868700 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.868730 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:37Z","lastTransitionTime":"2025-11-28T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.972592 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.972671 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.972701 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.972736 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:37 crc kubenswrapper[4772]: I1128 11:07:37.972760 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:37Z","lastTransitionTime":"2025-11-28T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.075703 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.075784 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.075806 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.075829 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.075849 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:38Z","lastTransitionTime":"2025-11-28T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.179445 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.179523 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.179546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.179576 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.179601 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:38Z","lastTransitionTime":"2025-11-28T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.283035 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.283069 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.283082 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.283100 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.283112 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:38Z","lastTransitionTime":"2025-11-28T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.385441 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.385513 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.385529 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.385552 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.385568 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:38Z","lastTransitionTime":"2025-11-28T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.488340 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.488402 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.488414 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.488433 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.488449 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:38Z","lastTransitionTime":"2025-11-28T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.591694 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.591771 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.591789 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.591819 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.591842 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:38Z","lastTransitionTime":"2025-11-28T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.695437 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.695473 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.695484 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.695501 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.695512 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:38Z","lastTransitionTime":"2025-11-28T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.798580 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.798646 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.798664 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.798691 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.798733 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:38Z","lastTransitionTime":"2025-11-28T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.902534 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.902601 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.902618 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.902640 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.902656 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:38Z","lastTransitionTime":"2025-11-28T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.994129 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.994226 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:38 crc kubenswrapper[4772]: E1128 11:07:38.994303 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.994421 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:38 crc kubenswrapper[4772]: E1128 11:07:38.994502 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:38 crc kubenswrapper[4772]: E1128 11:07:38.994640 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:38 crc kubenswrapper[4772]: I1128 11:07:38.994930 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:38 crc kubenswrapper[4772]: E1128 11:07:38.995240 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.005762 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.005810 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.005838 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.005853 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.005864 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:39Z","lastTransitionTime":"2025-11-28T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.108827 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.108895 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.108913 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.108938 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.108957 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:39Z","lastTransitionTime":"2025-11-28T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.211500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.211855 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.211952 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.212054 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.212147 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:39Z","lastTransitionTime":"2025-11-28T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.315563 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.315607 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.315619 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.315635 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.315646 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:39Z","lastTransitionTime":"2025-11-28T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.418582 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.418665 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.418681 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.418702 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.418718 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:39Z","lastTransitionTime":"2025-11-28T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.521054 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.521090 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.521099 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.521114 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 11:07:39 crc kubenswrapper[4772]: I1128 11:07:39.521124 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:39Z","lastTransitionTime":"2025-11-28T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats verbatim except for timestamps at ~100 ms intervals from 11:07:39.624 through 11:07:40.961 ...]
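The cycle above is the kubelet's node-status sync loop: while the CRI runtime reports NetworkReady=false, each pass records the same events and re-asserts the NotReady condition. The trigger is literal -- the runtime finds no CNI config in /etc/kubernetes/cni/net.d/. A hypothetical triage sketch (Python, not kubelet source) that mirrors the check the log message describes:

# Sketch: reproduce the check behind "no CNI configuration file in
# /etc/kubernetes/cni/net.d/". The runtime scans that confDir for
# *.conf/*.conflist/*.json files; an empty result is what surfaces as
# NetworkReady=false in the entries above.
import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # confDir named in the log message

def cni_configs(conf_dir=CNI_CONF_DIR):
    """List the CNI config files a runtime would consider."""
    try:
        entries = os.listdir(conf_dir)
    except FileNotFoundError:
        return []
    return sorted(os.path.join(conf_dir, e) for e in entries
                  if e.endswith((".conf", ".conflist", ".json")))

if __name__ == "__main__":
    found = cni_configs()
    print("\n".join(found) or "no CNI configuration file found -- "
          "the network provider has not written its config yet")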
Nov 28 11:07:40 crc kubenswrapper[4772]: I1128 11:07:40.994069 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:07:40 crc kubenswrapper[4772]: I1128 11:07:40.994118 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:07:40 crc kubenswrapper[4772]: I1128 11:07:40.994172 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:07:40 crc kubenswrapper[4772]: I1128 11:07:40.994180 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:07:40 crc kubenswrapper[4772]: E1128 11:07:40.994271 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:07:40 crc kubenswrapper[4772]: E1128 11:07:40.994425 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 11:07:40 crc kubenswrapper[4772]: E1128 11:07:40.994567 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:07:40 crc kubenswrapper[4772]: E1128 11:07:40.994695 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... the five-entry node status cycle recurs at 11:07:41.064, 11:07:41.167, 11:07:41.271, 11:07:41.375, and 11:07:41.414 ...]
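Each "Node became not ready" entry embeds the Ready condition as JSON after condition=. When triaging a captured journal, a small hypothetical helper can pull that payload out and print the reason; the sample line below is abridged from the entries above:

# Hypothetical log-triage helper: extract the condition={...} JSON from a
# "Node became not ready" journal line and print why the node is NotReady.
import json
import re

SAMPLE = ('I1128 11:07:41.064230 4772 setters.go:603] "Node became not ready" '
          'node="crc" condition={"type":"Ready","status":"False",'
          '"reason":"KubeletNotReady","message":"container runtime network '
          'not ready: NetworkReady=false reason:NetworkPluginNotReady"}')
# (message abridged; the real entries carry the full CNI text seen above)

def ready_condition(line):
    """Return the parsed condition object, or None if the line has none."""
    m = re.search(r"condition=(\{.*\})", line)
    return json.loads(m.group(1)) if m else None

cond = ready_condition(SAMPLE)
if cond and cond["status"] == "False":
    print(f'{cond["reason"]}: {cond["message"]}')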
Nov 28 11:07:41 crc kubenswrapper[4772]: E1128 11:07:41.437130 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:41Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.443535 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.443613 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.443637 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.443666 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.443690 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:41Z","lastTransitionTime":"2025-11-28T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:41 crc kubenswrapper[4772]: E1128 11:07:41.466876 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:41Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.526724 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.526777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.526792 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.526814 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.526827 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:41Z","lastTransitionTime":"2025-11-28T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:41 crc kubenswrapper[4772]: E1128 11:07:41.545555 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:41Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:41 crc kubenswrapper[4772]: E1128 11:07:41.545788 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.548471 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.548543 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.548562 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.548590 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.548609 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:41Z","lastTransitionTime":"2025-11-28T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.652612 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.652657 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.652668 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.652686 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.652700 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:41Z","lastTransitionTime":"2025-11-28T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.755758 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.755797 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.755806 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.755821 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.755831 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:41Z","lastTransitionTime":"2025-11-28T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.858182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.858239 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.858251 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.858270 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.858283 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:41Z","lastTransitionTime":"2025-11-28T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.961682 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.961728 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.961741 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.961760 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:41 crc kubenswrapper[4772]: I1128 11:07:41.961773 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:41Z","lastTransitionTime":"2025-11-28T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.020838 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.040282 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.054794 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.065164 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.065217 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.065229 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.065249 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.065262 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:42Z","lastTransitionTime":"2025-11-28T11:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.068818 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.090513 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.110575 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe579c5e-4747-41d8-babd-a7d6142169f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c22fe366a9ad1de0b215b9a9583ae3cb0a683107919c53de47fa2d49acc799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20035e17d1144a41abbfa7f960dd7a68e1e4ce70ef574dcfedaebb00ce96d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aebc818d09cc998a9cde1f0342e0cb5cf4da00f41a7a6710631f27ada2d58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.125349 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.153870 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8
d460709f2e3ce3f5ca66b2b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:30Z\\\",\\\"message\\\":\\\"583 6477 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1128 11:07:29.904627 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:29.904627 6477 services_controller.go:451] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.168163 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.168252 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.168270 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.168305 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.168325 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:42Z","lastTransitionTime":"2025-11-28T11:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.173723 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.186573 4772 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.200439 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.216014 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.230921 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.243878 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.257689 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.270959 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.271036 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.271056 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.271088 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.271111 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:42Z","lastTransitionTime":"2025-11-28T11:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.274860 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.292202 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 
11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.309014 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.379409 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.379448 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.379456 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.379472 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.379482 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:42Z","lastTransitionTime":"2025-11-28T11:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
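Every status patch above fails for the same reason: the network-node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-28. Below is a minimal sketch of the same x509 validity-window check, not part of the log; it assumes the third-party cryptography package (>= 42, for the *_utc properties) and a hypothetical certificate path.

    # Sketch: reproduce the "certificate has expired or is not yet valid" test
    # that the webhook client performs during the TLS handshake.
    from datetime import datetime, timezone
    from cryptography import x509

    def check_validity(pem_path: str) -> None:
        with open(pem_path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())
        now = datetime.now(timezone.utc)
        if now > cert.not_valid_after_utc:
            print(f"expired: current time {now:%Y-%m-%dT%H:%M:%SZ} is after "
                  f"{cert.not_valid_after_utc:%Y-%m-%dT%H:%M:%SZ}")
        elif now < cert.not_valid_before_utc:
            print("not yet valid")
        else:
            print("valid")

    check_validity("/path/to/webhook-serving-cert.pem")  # hypothetical path

The kubelet keeps retrying the patch; until the certificate is rotated, every attempt fails the handshake the same way, which is why the identical webhook error recurs for each pod below.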
[... the identical five-entry status cycle — four "Recording event message for node" events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) plus the setters.go:603 "Node became not ready" KubeletNotReady/no-CNI condition — repeats at 11:07:42.482, .585, .689, .793 and .897 ...]
Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.994561 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.994559 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:07:42 crc kubenswrapper[4772]: E1128 11:07:42.994998 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.994625 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:07:42 crc kubenswrapper[4772]: E1128 11:07:42.995350 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:07:42 crc kubenswrapper[4772]: I1128 11:07:42.994593 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:07:42 crc kubenswrapper[4772]: E1128 11:07:42.995141 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:07:42 crc kubenswrapper[4772]: E1128 11:07:42.995846 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
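The NotReady condition and the pod sync errors above all trace back to one fact: nothing has written a CNI network config yet (ovnkube-node is crash-looping, as the later entries show). A minimal sketch of the kind of check behind "no CNI configuration file in /etc/kubernetes/cni/net.d/" follows; the exact CRI-O/ocicni matching rules may differ, so treat the extension list as an assumption.

    # Sketch: the runtime reports NetworkReady=false until at least one
    # network config (*.conf, *.conflist, *.json) appears in the conf dir.
    import glob
    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the log

    def network_ready(conf_dir: str = CNI_CONF_DIR) -> bool:
        patterns = ("*.conf", "*.conflist", "*.json")
        files = [p for pat in patterns for p in glob.glob(os.path.join(conf_dir, pat))]
        return len(files) > 0

    if not network_ready():
        print("NetworkReady=false: no CNI configuration file in", CNI_CONF_DIR)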
[... the same five-entry status cycle repeats at 11:07:43.001, .105, .209, .316, .420, .523, .626, .729, .835, .938 and 11:07:44.042, .145, .248, .353, .455, .558, .661, .763, .890, .993 ...]
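Each setters.go:603 entry carries the full Ready condition as inline JSON, so the reason and message can be parsed straight out of the line. A minimal sketch, not part of the log, using a shortened sample entry:

    # Sketch: cut the condition={...} payload out of a setters.go line and parse it.
    import json

    line = ('I1128 11:07:43.001618 4772 setters.go:603] "Node became not ready" node="crc" '
            'condition={"type":"Ready","status":"False",'
            '"lastHeartbeatTime":"2025-11-28T11:07:43Z",'
            '"lastTransitionTime":"2025-11-28T11:07:43Z","reason":"KubeletNotReady",'
            '"message":"container runtime network not ready: NetworkReady=false"}')

    cond = json.loads(line.split("condition=", 1)[1])
    print(cond["reason"], "-", cond["message"])  # KubeletNotReady - container runtime ...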
Nov 28 11:07:44 crc kubenswrapper[4772]: I1128 11:07:44.993403 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:07:44 crc kubenswrapper[4772]: E1128 11:07:44.993540 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 11:07:44 crc kubenswrapper[4772]: I1128 11:07:44.993597 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:07:44 crc kubenswrapper[4772]: I1128 11:07:44.993618 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:07:44 crc kubenswrapper[4772]: I1128 11:07:44.993628 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:07:44 crc kubenswrapper[4772]: E1128 11:07:44.993660 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:07:44 crc kubenswrapper[4772]: E1128 11:07:44.993701 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:07:44 crc kubenswrapper[4772]: E1128 11:07:44.993868 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
[... the same five-entry status cycle repeats at 11:07:45.095, .198, .300, .404, .508, .612, .716, .820 and .923 ...]
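At roughly ten identical status cycles per second, entries like those elided above group naturally by message once the klog header is stripped. A minimal sketch of that grouping, not part of the log; the two sample lines stand in for the full stream:

    # Sketch: count repeated kubelet log entries, keyed on everything after the
    # "I1128 HH:MM:SS.uuuuuu PID" klog prefix so timestamps don't split groups.
    import re
    from collections import Counter

    ENTRY = re.compile(r'^[IEW]\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+\s+(?P<msg>.*)$')

    def dedup(lines):
        counts = Counter()
        for line in lines:
            payload = line.split("]: ", 1)[-1]  # drop the syslog prefix
            m = ENTRY.match(payload)
            counts[m.group("msg") if m else payload] += 1
        return counts.most_common()

    sample = [
        'Nov 28 11:07:43 crc kubenswrapper[4772]: I1128 11:07:43.001136 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"',
        'Nov 28 11:07:43 crc kubenswrapper[4772]: I1128 11:07:43.105780 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"',
    ]
    for msg, n in dedup(sample):
        print(n, msg)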
Nov 28 11:07:45 crc kubenswrapper[4772]: I1128 11:07:45.995215 4772 scope.go:117] "RemoveContainer" containerID="87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4"
Nov 28 11:07:45 crc kubenswrapper[4772]: E1128 11:07:45.996301 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"
[... the same five-entry status cycle repeats at 11:07:46.026, .129, .231, .333, .435 and .538; the excerpt ends mid-entry in the 11:07:46.640 cycle ...]
Has your network provider started?"} Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.742972 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.743012 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.743025 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.743042 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.743052 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:46Z","lastTransitionTime":"2025-11-28T11:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.844553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.844592 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.844601 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.844616 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.844626 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:46Z","lastTransitionTime":"2025-11-28T11:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.946315 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.946399 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.946417 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.946442 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.946459 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:46Z","lastTransitionTime":"2025-11-28T11:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.993924 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.993989 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:46 crc kubenswrapper[4772]: E1128 11:07:46.994041 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.994081 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:46 crc kubenswrapper[4772]: I1128 11:07:46.994123 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:46 crc kubenswrapper[4772]: E1128 11:07:46.994238 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:46 crc kubenswrapper[4772]: E1128 11:07:46.994481 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:46 crc kubenswrapper[4772]: E1128 11:07:46.994544 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.049319 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.049416 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.049437 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.049461 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.049479 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:47Z","lastTransitionTime":"2025-11-28T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.152585 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.152622 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.152632 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.152647 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.152658 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:47Z","lastTransitionTime":"2025-11-28T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.255262 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.255295 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.255306 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.255324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.255335 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:47Z","lastTransitionTime":"2025-11-28T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.357913 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.357958 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.357969 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.357983 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.357992 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:47Z","lastTransitionTime":"2025-11-28T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.461153 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.461181 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.461192 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.461205 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.461215 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:47Z","lastTransitionTime":"2025-11-28T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.564461 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.564563 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.564583 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.564609 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.564627 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:47Z","lastTransitionTime":"2025-11-28T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:47 crc kubenswrapper[4772]: E1128 11:07:47.631824 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:07:47 crc kubenswrapper[4772]: E1128 11:07:47.632260 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs podName:def9b3ab-2dc8-4f40-9d6b-346f9cdbc386 nodeName:}" failed. No retries permitted until 2025-11-28 11:08:19.632235132 +0000 UTC m=+97.955478399 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs") pod "network-metrics-daemon-qstr6" (UID: "def9b3ab-2dc8-4f40-9d6b-346f9cdbc386") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.631665 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs\") pod \"network-metrics-daemon-qstr6\" (UID: \"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\") " pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.667040 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.667075 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.667085 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.667098 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.667109 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:47Z","lastTransitionTime":"2025-11-28T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.770247 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.770470 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.770488 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.770506 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.770520 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:47Z","lastTransitionTime":"2025-11-28T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.873220 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.873276 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.873291 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.873315 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.873333 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:47Z","lastTransitionTime":"2025-11-28T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.976128 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.976414 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.976500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.976575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:47 crc kubenswrapper[4772]: I1128 11:07:47.976651 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:47Z","lastTransitionTime":"2025-11-28T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.079425 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.079466 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.079500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.079518 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.079531 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:48Z","lastTransitionTime":"2025-11-28T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.183333 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.183394 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.183405 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.183423 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.183434 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:48Z","lastTransitionTime":"2025-11-28T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.286225 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.286533 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.286556 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.286591 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.286618 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:48Z","lastTransitionTime":"2025-11-28T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.388960 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.389902 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.390059 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.390204 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.390394 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:48Z","lastTransitionTime":"2025-11-28T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.408328 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qsnnj_a4e5807b-7c14-477e-af8b-1260b997ff17/kube-multus/0.log" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.408402 4772 generic.go:334] "Generic (PLEG): container finished" podID="a4e5807b-7c14-477e-af8b-1260b997ff17" containerID="a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b" exitCode=1 Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.408437 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qsnnj" event={"ID":"a4e5807b-7c14-477e-af8b-1260b997ff17","Type":"ContainerDied","Data":"a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b"} Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.408813 4772 scope.go:117] "RemoveContainer" containerID="a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.422852 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.436734 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 
11:07:48.461219 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\
\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.474435 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.490676 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.492512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.492572 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.492580 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.492597 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.492630 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:48Z","lastTransitionTime":"2025-11-28T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.504418 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.516839 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.530651 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe579c5e-4747-41d8-babd-a7d6142169f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c22fe366a9ad1de0b215b9a9583ae3cb0a683107919c53de47fa2d49acc799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20035e17d1144a41abbfa7f960dd7a68e1e4ce70ef574dcfedaebb00ce96d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aebc818d09cc998a9cde1f0342e0cb5cf4da00f41a7a6710631f27ada2d58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.541441 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.559460 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8
d460709f2e3ce3f5ca66b2b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:30Z\\\",\\\"message\\\":\\\"583 6477 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1128 11:07:29.904627 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:29.904627 6477 services_controller.go:451] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.574268 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.585253 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.595786 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.595819 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.595829 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.595844 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.595854 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:48Z","lastTransitionTime":"2025-11-28T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.599792 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.615972 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.632990 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.648351 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.662690 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:47Z\\\",\\\"message\\\":\\\"2025-11-28T11:07:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3cb9219d-4ef9-4254-bdd1-7f19bc46022c\\\\n2025-11-28T11:07:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3cb9219d-4ef9-4254-bdd1-7f19bc46022c to /host/opt/cni/bin/\\\\n2025-11-28T11:07:02Z [verbose] multus-daemon started\\\\n2025-11-28T11:07:02Z [verbose] Readiness Indicator file check\\\\n2025-11-28T11:07:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.676077 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.697988 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.698037 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.698046 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.698066 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.698077 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:48Z","lastTransitionTime":"2025-11-28T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.801110 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.801154 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.801163 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.801179 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.801192 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:48Z","lastTransitionTime":"2025-11-28T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.903836 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.903884 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.903899 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.903921 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.903940 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:48Z","lastTransitionTime":"2025-11-28T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.994020 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.994040 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.994091 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:48 crc kubenswrapper[4772]: E1128 11:07:48.994645 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:48 crc kubenswrapper[4772]: E1128 11:07:48.994449 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:48 crc kubenswrapper[4772]: I1128 11:07:48.994127 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:48 crc kubenswrapper[4772]: E1128 11:07:48.994703 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:48 crc kubenswrapper[4772]: E1128 11:07:48.994835 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.006907 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.006972 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.006989 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.007016 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.007041 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:49Z","lastTransitionTime":"2025-11-28T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.110093 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.110129 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.110138 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.110153 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.110163 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:49Z","lastTransitionTime":"2025-11-28T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.212170 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.212232 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.212250 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.212275 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.212292 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:49Z","lastTransitionTime":"2025-11-28T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.314419 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.314799 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.315025 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.315226 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.315472 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:49Z","lastTransitionTime":"2025-11-28T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.412810 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qsnnj_a4e5807b-7c14-477e-af8b-1260b997ff17/kube-multus/0.log" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.412877 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qsnnj" event={"ID":"a4e5807b-7c14-477e-af8b-1260b997ff17","Type":"ContainerStarted","Data":"125d71e6561215a264909d21c3847fb2269b14c5933345eee667243e9cbf3a4a"} Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.417777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.417843 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.417865 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.417893 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.417913 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:49Z","lastTransitionTime":"2025-11-28T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.433111 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.454555 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.467516 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.486241 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf646
2d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:30Z\\\",\\\"message\\\":\\\"583 6477 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1128 11:07:29.904627 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:29.904627 6477 services_controller.go:451] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.502762 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.522076 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.522163 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:49 crc 
kubenswrapper[4772]: I1128 11:07:49.522226 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.522260 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.522286 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:49Z","lastTransitionTime":"2025-11-28T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.523885 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.541076 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d71e6561215a264909d21c3847fb2269b14c5933345eee667243e9cbf3a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:47Z\\\",\\\"message\\\":\\\"2025-11-28T11:07:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3cb9219d-4ef9-4254-bdd1-7f19bc46022c\\\\n2025-11-28T11:07:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3cb9219d-4ef9-4254-bdd1-7f19bc46022c to /host/opt/cni/bin/\\\\n2025-11-28T11:07:02Z [verbose] multus-daemon started\\\\n2025-11-28T11:07:02Z [verbose] Readiness Indicator file check\\\\n2025-11-28T11:07:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.556991 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.577472 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 
configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.593201 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.608142 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.620843 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 
11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.625222 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.625261 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.625276 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.625295 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.625307 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:49Z","lastTransitionTime":"2025-11-28T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.632108 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.643225 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"
}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.656260 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe579c5e-4747-41d8-babd-a7d6142169f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c22fe366a9ad1de0b215b9a9583ae3cb0a683107919c53de47fa2d49acc799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20035e17d1144a41abbfa7f960dd7a68e1e4ce70ef574dcfedaebb00ce96d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aebc818d09cc998a9cde1f0342e0cb5cf4da00f41a7a6710631f27ada2d58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-rec
overy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.680157 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b
4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.695340 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.710260 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.727581 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.727622 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.727633 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.727650 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.727662 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:49Z","lastTransitionTime":"2025-11-28T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.829872 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.829906 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.829949 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.829968 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.829978 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:49Z","lastTransitionTime":"2025-11-28T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.932922 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.932980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.933028 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.933055 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:49 crc kubenswrapper[4772]: I1128 11:07:49.933127 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:49Z","lastTransitionTime":"2025-11-28T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.036439 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.036479 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.036488 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.036501 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.036511 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:50Z","lastTransitionTime":"2025-11-28T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.140048 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.140098 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.140108 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.140129 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.140141 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:50Z","lastTransitionTime":"2025-11-28T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.243506 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.243549 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.243562 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.243582 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.243595 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:50Z","lastTransitionTime":"2025-11-28T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.346981 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.347024 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.347036 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.347054 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.347067 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:50Z","lastTransitionTime":"2025-11-28T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.450572 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.450678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.450761 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.450805 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.450836 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:50Z","lastTransitionTime":"2025-11-28T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.554122 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.554164 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.554175 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.554192 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.554205 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:50Z","lastTransitionTime":"2025-11-28T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.656990 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.657034 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.657051 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.657073 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.657090 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:50Z","lastTransitionTime":"2025-11-28T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.759939 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.759990 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.760002 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.760022 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.760036 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:50Z","lastTransitionTime":"2025-11-28T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.862804 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.862854 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.862869 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.862887 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.862901 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:50Z","lastTransitionTime":"2025-11-28T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.965753 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.965824 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.965837 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.965865 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.965878 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:50Z","lastTransitionTime":"2025-11-28T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.994376 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.994421 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.994437 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:50 crc kubenswrapper[4772]: I1128 11:07:50.994484 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:50 crc kubenswrapper[4772]: E1128 11:07:50.994598 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:50 crc kubenswrapper[4772]: E1128 11:07:50.994678 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:50 crc kubenswrapper[4772]: E1128 11:07:50.994780 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:50 crc kubenswrapper[4772]: E1128 11:07:50.994900 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.069107 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.069170 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.069223 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.069252 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.069273 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.171183 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.171225 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.171235 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.171251 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.171261 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.273081 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.273150 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.273175 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.273199 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.273217 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.374853 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.374900 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.374916 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.374935 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.374951 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.476508 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.476540 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.476548 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.476563 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.476573 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.578653 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.578691 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.578701 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.578715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.578726 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.680906 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.680941 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.680949 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.680964 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.680976 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.708600 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.708642 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.708657 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.708674 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.708687 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:51 crc kubenswrapper[4772]: E1128 11:07:51.721226 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:51Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.725081 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.725111 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.725121 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.725136 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.725147 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:51 crc kubenswrapper[4772]: E1128 11:07:51.737124 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:51Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.740162 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.740197 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.740210 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.740228 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.740241 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:51 crc kubenswrapper[4772]: E1128 11:07:51.750408 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:51Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.753080 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.753108 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.753120 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.753134 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.753147 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:51 crc kubenswrapper[4772]: E1128 11:07:51.764124 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:51Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.767658 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.767693 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.767704 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.767719 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.767733 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:51 crc kubenswrapper[4772]: E1128 11:07:51.779622 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:51Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:51 crc kubenswrapper[4772]: E1128 11:07:51.779802 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.783234 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.783264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.783273 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.783286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.783296 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.885704 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.885740 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.885751 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.885765 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.885776 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.987807 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.987863 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.987886 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.987916 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:51 crc kubenswrapper[4772]: I1128 11:07:51.987937 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:51Z","lastTransitionTime":"2025-11-28T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.009164 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.021228 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 
11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.032056 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.043697 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe579c5e-4747-41d8-babd-a7d6142169f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c22fe366a9ad1de0b215b9a9583ae3cb0a683107919c53de47fa2d49acc799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20035e17d1144a41abbfa7f960dd7a68e1e4ce70ef574dcfedaebb00ce96d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aebc818d09cc998a9cde1f0342e0cb5cf4da00f41a7a6710631f27ada2d58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.061443 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\
\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.076063 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.089460 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.090707 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.090777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.090790 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.090939 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.090966 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:52Z","lastTransitionTime":"2025-11-28T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.100938 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.113198 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.123329 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.147765 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:30Z\\\",\\\"message\\\":\\\"583 6477 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1128 11:07:29.904627 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:29.904627 6477 services_controller.go:451] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.164240 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.177420 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.192131 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d71e6561215a264909d21c3847fb2269b14c5933345eee667243e9cbf3a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:47Z\\\",\\\"message\\\":\\\"2025-11-28T11:07:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3cb9219d-4ef9-4254-bdd1-7f19bc46022c\\\\n2025-11-28T11:07:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3cb9219d-4ef9-4254-bdd1-7f19bc46022c to /host/opt/cni/bin/\\\\n2025-11-28T11:07:02Z [verbose] multus-daemon started\\\\n2025-11-28T11:07:02Z [verbose] Readiness Indicator file check\\\\n2025-11-28T11:07:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.193482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.193512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.193525 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.193541 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.193553 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:52Z","lastTransitionTime":"2025-11-28T11:07:52Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.207998 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.222638 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.237058 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.248502 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.295835 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.295882 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.295895 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.295912 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.295925 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:52Z","lastTransitionTime":"2025-11-28T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.398102 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.398149 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.398162 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.398180 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.398194 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:52Z","lastTransitionTime":"2025-11-28T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.499894 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.499936 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.499950 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.499965 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.499977 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:52Z","lastTransitionTime":"2025-11-28T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.602626 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.602703 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.602728 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.602761 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.602785 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:52Z","lastTransitionTime":"2025-11-28T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.705409 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.705460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.705473 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.705489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.705524 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:52Z","lastTransitionTime":"2025-11-28T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.808306 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.808333 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.808341 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.808354 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.808390 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:52Z","lastTransitionTime":"2025-11-28T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
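Every status patch above is rejected for the same reason: the kubelet's POST to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 fails TLS verification because the serving certificate expired on 2025-08-24T17:21:41Z, well before the current time in the log. One way to confirm what the endpoint is actually presenting is to pull the certificate off the wire and compare its validity window against the clock. The sketch below is an illustrative diagnostic, not part of the log: the host and port come from the error text, and it assumes the third-party cryptography package (version 42 or newer) is available on the node.

```python
# Hedged sketch: inspect the certificate presented by the webhook endpoint
# that the kubelet cannot reach. Run on the node itself; host/port are taken
# from the failed webhook POST in the log above.
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509  # third-party package, assumed installed

HOST, PORT = "127.0.0.1", 9743  # endpoint from the log's webhook error

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # we want to inspect the cert, not trust it

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # raw DER bytes

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
print("subject:   ", cert.subject.rfc4514_string())
print("not before:", cert.not_valid_before_utc)
print("not after: ", cert.not_valid_after_utc)
print("expired:   ", now > cert.not_valid_after_utc)
```

If the "not after" value matches the 2025-08-24T17:21:41Z stamp in the errors above, the fix is on the certificate side (rotation), not on the kubelet side; until then every status patch routed through this webhook will keep failing.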
Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.993780 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.993868 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.993870 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:07:52 crc kubenswrapper[4772]: E1128 11:07:52.993927 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 11:07:52 crc kubenswrapper[4772]: E1128 11:07:52.993990 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:07:52 crc kubenswrapper[4772]: I1128 11:07:52.994046 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:07:52 crc kubenswrapper[4772]: E1128 11:07:52.994201 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:07:52 crc kubenswrapper[4772]: E1128 11:07:52.994312 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:07:54 crc kubenswrapper[4772]: I1128 11:07:54.993790 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:07:54 crc kubenswrapper[4772]: I1128 11:07:54.993861 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:07:54 crc kubenswrapper[4772]: I1128 11:07:54.993881 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:07:54 crc kubenswrapper[4772]: I1128 11:07:54.993868 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:07:54 crc kubenswrapper[4772]: E1128 11:07:54.993983 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:07:54 crc kubenswrapper[4772]: E1128 11:07:54.994109 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:07:54 crc kubenswrapper[4772]: E1128 11:07:54.994327 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:07:54 crc kubenswrapper[4772]: E1128 11:07:54.994567 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.427033 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.427390 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.427524 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.427610 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.427767 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:56Z","lastTransitionTime":"2025-11-28T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.531100 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.531155 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.531172 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.531196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.531213 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:56Z","lastTransitionTime":"2025-11-28T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.634668 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.635807 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.635959 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.636115 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.636286 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:56Z","lastTransitionTime":"2025-11-28T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.739707 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.739769 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.739783 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.739804 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.739817 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:56Z","lastTransitionTime":"2025-11-28T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.842611 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.842683 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.842705 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.842734 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.842760 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:56Z","lastTransitionTime":"2025-11-28T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.945994 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.946059 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.946090 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.946122 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.946144 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:56Z","lastTransitionTime":"2025-11-28T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.994062 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:07:56 crc kubenswrapper[4772]: E1128 11:07:56.994209 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.994237 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.994264 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:07:56 crc kubenswrapper[4772]: I1128 11:07:56.994082 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:07:56 crc kubenswrapper[4772]: E1128 11:07:56.994410 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:07:56 crc kubenswrapper[4772]: E1128 11:07:56.994495 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:07:56 crc kubenswrapper[4772]: E1128 11:07:56.994581 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:07:57 crc kubenswrapper[4772]: I1128 11:07:57.049200 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:07:57 crc kubenswrapper[4772]: I1128 11:07:57.049250 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:07:57 crc kubenswrapper[4772]: I1128 11:07:57.049261 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:07:57 crc kubenswrapper[4772]: I1128 11:07:57.049278 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:07:57 crc kubenswrapper[4772]: I1128 11:07:57.049288 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:07:57Z","lastTransitionTime":"2025-11-28T11:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
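[editor's note] Every "Error syncing pod, skipping" record above carries the same NetworkPluginNotReady cause. As an illustrative aid (not part of the journal; the regex and the sample line below are assumptions modeled on these records), a minimal Python sketch that extracts the blocked pods and their UIDs from a dump like this one:

    import re

    # Matches the kubelet "Error syncing pod, skipping" records seen above.
    SKIP = re.compile(
        r'E\d{4} (?P<ts>\d{2}:\d{2}:\d{2}\.\d+).*?'
        r'"Error syncing pod, skipping".*?'
        r'pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"'
    )

    def blocked_pods(lines):
        """Yield (timestamp, pod, podUID) for every skipped pod sync."""
        for line in lines:
            m = SKIP.search(line)
            if m:
                yield m.group("ts"), m.group("pod"), m.group("uid")

    if __name__ == "__main__":
        # Shortened sample record, modeled on the entries above.
        sample = ('Nov 28 11:07:56 crc kubenswrapper[4772]: E1128 11:07:56.994209 4772 '
                  'pod_workers.go:1301] "Error syncing pod, skipping" '
                  'err="network is not ready" '
                  'pod="openshift-multus/network-metrics-daemon-qstr6" '
                  'podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"')
        for ts, pod, uid in blocked_pods([sample]):
            print(ts, pod, uid)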
[... the node-status cycle (NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready") continues roughly every 100 ms, unchanged except for timestamps, from 11:07:57.049 through 11:07:58.902 ...]
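[editor's note] The condition payload in each "Node became not ready" record is plain JSON, so the transition times can be pulled out mechanically rather than read by eye. A small sketch under the same caveats as above (the regex and the sample line are assumptions based on the records in this dump):

    import json
    import re

    # Captures the node name and the JSON condition object from a setters.go record.
    COND = re.compile(r'"Node became not ready" node="(?P<node>[^"]+)" condition=(?P<cond>\{.*\})')

    def ready_condition(line):
        """Return (node, condition dict) for a "Node became not ready" record, else None."""
        m = COND.search(line)
        return (m.group("node"), json.loads(m.group("cond"))) if m else None

    if __name__ == "__main__":
        sample = ('Nov 28 11:07:55 crc kubenswrapper[4772]: I1128 11:07:55.285263 4772 setters.go:603] '
                  '"Node became not ready" node="crc" condition={"type":"Ready","status":"False",'
                  '"lastHeartbeatTime":"2025-11-28T11:07:55Z","lastTransitionTime":"2025-11-28T11:07:55Z",'
                  '"reason":"KubeletNotReady","message":"container runtime network not ready"}')
        node, cond = ready_condition(sample)
        print(node, cond["reason"], cond["lastTransitionTime"])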
Nov 28 11:07:58 crc kubenswrapper[4772]: I1128 11:07:58.994186 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:07:58 crc kubenswrapper[4772]: I1128 11:07:58.994253 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:07:58 crc kubenswrapper[4772]: I1128 11:07:58.994337 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:07:58 crc kubenswrapper[4772]: E1128 11:07:58.994428 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:07:58 crc kubenswrapper[4772]: I1128 11:07:58.994718 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:07:58 crc kubenswrapper[4772]: E1128 11:07:58.994823 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:07:58 crc kubenswrapper[4772]: E1128 11:07:58.994708 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 11:07:58 crc kubenswrapper[4772]: E1128 11:07:58.994913 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:07:58 crc kubenswrapper[4772]: I1128 11:07:58.995864 4772 scope.go:117] "RemoveContainer" containerID="87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4"
[... the node-status cycle continues from 11:07:59.005 through 11:07:59.416 ...]
Nov 28 11:07:59 crc kubenswrapper[4772]: I1128 11:07:59.444598 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/2.log"
[... the node-status cycle continues from 11:07:59.519 through 11:08:00.442 ...]
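[editor's note] Every NotReady record above names the same root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/, the directory OVN-Kubernetes (the ovnkube-node-b7vdn pod whose logs are being parsed above) is expected to populate. A hedged sketch of the check one could run on the node itself; the path comes from the error message, while the file extensions reflect common CNI conventions, not anything this log states:

    import pathlib

    # Directory named in the kubelet error message above.
    CNI_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")

    def cni_configs():
        """List CNI config files a runtime would typically consider."""
        if not CNI_DIR.is_dir():
            return []
        return sorted(p for p in CNI_DIR.iterdir()
                      if p.suffix in {".conf", ".conflist", ".json"})

    if __name__ == "__main__":
        found = cni_configs()
        if found:
            print("CNI configs present:", ", ".join(p.name for p in found))
        else:
            print(f"no CNI configuration in {CNI_DIR} - consistent with the kubelet error above")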
Has your network provider started?"} Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.458817 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/2.log" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.466886 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerStarted","Data":"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f"} Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.467400 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.482708 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.496755 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.514909 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf646
2d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:30Z\\\",\\\"message\\\":\\\"583 6477 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1128 11:07:29.904627 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:29.904627 6477 services_controller.go:451] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.530144 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.546116 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.546202 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:00 crc 
kubenswrapper[4772]: I1128 11:08:00.546220 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.546274 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.546293 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:00Z","lastTransitionTime":"2025-11-28T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.550996 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.571230 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d71e6561215a264909d21c3847fb2269b14c5933345eee667243e9cbf3a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:47Z\\\",\\\"message\\\":\\\"2025-11-28T11:07:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3cb9219d-4ef9-4254-bdd1-7f19bc46022c\\\\n2025-11-28T11:07:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3cb9219d-4ef9-4254-bdd1-7f19bc46022c to /host/opt/cni/bin/\\\\n2025-11-28T11:07:02Z [verbose] multus-daemon started\\\\n2025-11-28T11:07:02Z [verbose] Readiness Indicator file check\\\\n2025-11-28T11:07:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.586676 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.601594 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 
configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.626016 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.647398 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.648440 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.648478 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.648487 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.648501 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.648510 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:00Z","lastTransitionTime":"2025-11-28T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.664864 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.676342 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 
11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.687063 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.698422 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe579c5e-4747-41d8-babd-a7d6142169f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c22fe366a9ad1de0b215b9a9583ae3cb0a683107919c53de47fa2d49acc799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20035e17d1144a41abbfa7f960dd7a68e1e4ce70ef574dcfedaebb00ce96d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aebc818d09cc998a9cde1f0342e0cb5cf4da00f41a7a6710631f27ada2d58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.715869 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\
\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.727788 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.739816 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.750978 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.751029 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.751047 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.751076 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.751089 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:00Z","lastTransitionTime":"2025-11-28T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.752301 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:00Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.853389 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.853436 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.853449 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.853468 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.853480 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:00Z","lastTransitionTime":"2025-11-28T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.955869 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.955908 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.955917 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.955932 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.955942 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:00Z","lastTransitionTime":"2025-11-28T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.993910 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.993989 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:00 crc kubenswrapper[4772]: E1128 11:08:00.994020 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.994068 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:00 crc kubenswrapper[4772]: E1128 11:08:00.994167 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:00 crc kubenswrapper[4772]: E1128 11:08:00.994262 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:00 crc kubenswrapper[4772]: I1128 11:08:00.994478 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:00 crc kubenswrapper[4772]: E1128 11:08:00.994703 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.058273 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.058313 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.058323 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.058340 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.058350 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:01Z","lastTransitionTime":"2025-11-28T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.160505 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.160550 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.160563 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.160580 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.160592 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:01Z","lastTransitionTime":"2025-11-28T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.262497 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.262559 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.262578 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.262603 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.262618 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:01Z","lastTransitionTime":"2025-11-28T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.364078 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.364116 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.364127 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.364141 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.364152 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:01Z","lastTransitionTime":"2025-11-28T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.467203 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.467241 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.467252 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.467267 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.467278 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:01Z","lastTransitionTime":"2025-11-28T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.472022 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/3.log" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.472929 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/2.log" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.477693 4772 generic.go:334] "Generic (PLEG): container finished" podID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerID="1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f" exitCode=1 Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.477751 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerDied","Data":"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f"} Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.477795 4772 scope.go:117] "RemoveContainer" containerID="87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.479803 4772 scope.go:117] "RemoveContainer" containerID="1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f" Nov 28 11:08:01 crc kubenswrapper[4772]: E1128 11:08:01.480028 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.510342 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b
4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.525751 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.538644 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.552034 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.563218 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.569579 4772 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.569679 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.569697 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.569719 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.569736 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:01Z","lastTransitionTime":"2025-11-28T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.575147 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe579c5e-4747-41d8-babd-a7d6142169f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c22fe366a9ad1de0b215b9a9583ae3cb0a683107919c53de47fa2d49acc799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20035e17d1144a41abbfa7f960dd7a68e1e4ce70ef574dcfedaebb00ce96d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aebc818d09cc998a9cde1f0342e0cb5cf4da00f41a7a6710631f27ada2d58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.584797 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.607118 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:30Z\\\",\\\"message\\\":\\\"583 6477 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1128 11:07:29.904627 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:29.904627 6477 services_controller.go:451] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:08:00Z\\\",\\\"message\\\":\\\"a1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:08:00.257502 6847 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:08:00.257557 6847 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:08:00.257705 6847 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:08:00.257937 6847 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:08:00.257978 6847 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:08:00.265752 6847 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1128 11:08:00.265783 6847 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1128 11:08:00.265850 6847 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:08:00.265888 6847 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 11:08:00.265985 6847 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"nod
e-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.623107 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.632310 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.643873 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.656729 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.672293 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.673659 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.673700 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.673717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.673741 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.673757 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:01Z","lastTransitionTime":"2025-11-28T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.684143 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.696055 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d71e6561215a264909d21c3847fb2269b14c5933345eee667243e9cbf3a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:47Z\\\",\\\"message\\\":\\\"2025-11-28T11:07:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3cb9219d-4ef9-4254-bdd1-7f19bc46022c\\\\n2025-11-28T11:07:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3cb9219d-4ef9-4254-bdd1-7f19bc46022c to /host/opt/cni/bin/\\\\n2025-11-28T11:07:02Z [verbose] multus-daemon started\\\\n2025-11-28T11:07:02Z [verbose] Readiness Indicator file check\\\\n2025-11-28T11:07:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.709137 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.720148 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d8
7834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.730788 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:01Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.776157 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.776186 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.776198 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.776214 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.776225 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:01Z","lastTransitionTime":"2025-11-28T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.880260 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.880304 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.880315 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.880332 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.880343 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:01Z","lastTransitionTime":"2025-11-28T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.983219 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.983297 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.983316 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.983350 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:01 crc kubenswrapper[4772]: I1128 11:08:01.983403 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:01Z","lastTransitionTime":"2025-11-28T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.013109 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.038107 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.047332 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.047450 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.047476 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.047509 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.047534 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:02Z","lastTransitionTime":"2025-11-28T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.057177 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: E1128 11:08:02.068729 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 
2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.073659 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.073720 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.073736 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.073759 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.073804 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:02Z","lastTransitionTime":"2025-11-28T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.078885 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: E1128 11:08:02.101295 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list, nodeInfo, and runtimeHandlers elided; verbatim duplicate of the payload in the preceding "Error updating node status, will retry" entry ... ]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 
2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.106297 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d71e6561215a264909d21c3847fb2269b14c5933345eee667243e9cbf3a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:47Z\\\",\\\"message\\\":\\\"2025-11-28T11:07:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3cb9219d-4ef9-4254-bdd1-7f19bc46022c\\\\n2025-11-28T11:07:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3cb9219d-4ef9-4254-bdd1-7f19bc46022c to /host/opt/cni/bin/\\\\n2025-11-28T11:07:02Z [verbose] multus-daemon started\\\\n2025-11-28T11:07:02Z [verbose] Readiness Indicator file check\\\\n2025-11-28T11:07:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.108189 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.108215 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.108229 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.108247 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.108261 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:02Z","lastTransitionTime":"2025-11-28T11:08:02Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:02 crc kubenswrapper[4772]: E1128 11:08:02.122327 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list and nodeInfo elided; verbatim duplicate of the node-status payload above ... ]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.126668 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.128297 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.128417 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.128432 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.128473 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.128489 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:02Z","lastTransitionTime":"2025-11-28T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.139459 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: E1128 11:08:02.144303 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list and nodeInfo elided; verbatim duplicate of the node-status payload above ... ]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.147967 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.147989 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.147998 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.148013 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.148023 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:02Z","lastTransitionTime":"2025-11-28T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.156947 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe579c5e-4747-41d8-babd-a7d6142169f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c22fe366a9ad1de0b215b9a9583ae3cb0a683107919c53de47fa2d49acc799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20035e17d1144a41abbfa7f960dd7a68e1e4ce70ef574dcfedaebb00ce96d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aebc818d09cc998a9cde1f0342e0cb5cf4da00f41a7a6710631f27ada2d58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: E1128 11:08:02.164624 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a05b4f4f-c83a-40e9-9c28-0f224668a04f\\\",\\\"systemUUID\\\":\\\"d
5dbe66f-ecb2-4d9e-91ef-b89ae5bbbd55\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z"
Nov 28 11:08:02 crc kubenswrapper[4772]: E1128 11:08:02.164837 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.166426 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.166452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.166463 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.166480 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.166493 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:02Z","lastTransitionTime":"2025-11-28T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.188388 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b
4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.206764 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.221126 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.235001 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.248794 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.260010 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.269219 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.269264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.269276 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.269293 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.269305 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:02Z","lastTransitionTime":"2025-11-28T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.270030 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.287309 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87b31edc17906a5301e3cb131b7c7703b8e302e8d460709f2e3ce3f5ca66b2b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:30Z\\\",\\\"message\\\":\\\"583 6477 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1128 11:07:29.904627 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:07:29Z is after 2025-08-24T17:21:41Z]\\\\nI1128 11:07:29.904627 6477 services_controller.go:451] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:08:00Z\\\",\\\"message\\\":\\\"a1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:08:00.257502 6847 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:08:00.257557 6847 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:08:00.257705 6847 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:08:00.257937 6847 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:08:00.257978 6847 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:08:00.265752 6847 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1128 11:08:00.265783 6847 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1128 11:08:00.265850 6847 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:08:00.265888 6847 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 11:08:00.265985 6847 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.300346 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.308874 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.371031 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.371082 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.371094 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.371112 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.371125 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:02Z","lastTransitionTime":"2025-11-28T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.473859 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.473916 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.473926 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.473941 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.473949 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:02Z","lastTransitionTime":"2025-11-28T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.482100 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/3.log" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.485373 4772 scope.go:117] "RemoveContainer" containerID="1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f" Nov 28 11:08:02 crc kubenswrapper[4772]: E1128 11:08:02.485520 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.499887 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b9a4079ea00c546a545d843f7e37e6776cd62acb95b36d77753919f0bdb3bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.512572 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.521706 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mstcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f0ff770-c5af-4fea-a576-9bdceb785c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9c0123c19eec6e704269e9b23592d7b6451756b8969d32536a6539f910757b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7jx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mstcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.535082 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4e32c1-8c60-4972-ae38-a20020b374fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f735ed1089334e1e5bc5f785b84003974bec04d301b102655e436bd1c5bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22265\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfsjk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.550970 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe579c5e-4747-41d8-babd-a7d6142169f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c22fe366a9ad1de0b215b9a9583ae3cb0a683107919c53de47fa2d49acc799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c20035e17d1144a41abbfa7f960dd7a68e1e4ce70ef574dcfedaebb00ce96d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aebc818d09cc998a9cde1f0342e0cb5cf4da00f41a7a6710631f27ada2d58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d133368d73072aef58b488dfc90d4692f323f648f528bc3adbd5af121b8c2759\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.575536 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8235c2c-702a-4608-ae15-7a8e466e8e99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25c46028a714ee50308f9a744b7ab329c7c10a64d369c7aedc973b8a6efb1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c31b206efb7204b0c16ba42f4799d063765a72905094bad99fd2f6fd13852e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0
7b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6700b1a4685b398893cae81f60db4afd74ecdf8e75b6d0c90a5fadb7f4e466ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a52708980faa0516e869fbf7885bfd54072cc2b4ed2e3a35f17e36db2d67a650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915f5e46be1b9c8eafebc89d0929bc19b4a5c7ab502a74441c680af728d756d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd761873e80d2ae9553487b41525edeb7e5291301fde26f1fd6243b86957bc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed98317dc87538f04115d0cf64e981ce4116266eacf91e03a18f0d6a5200230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f998bd761bc3ecac837e510641a64d12ee78a30764b6fe3d1a7fe0f6b0e5f605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.576655 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.576694 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.576705 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.576721 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.576953 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:02Z","lastTransitionTime":"2025-11-28T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.599503 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c81c199b6a164ce923d787f7625f0b732af1252
45b4e27188a3ba951d82af9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:08:00Z\\\",\\\"message\\\":\\\"a1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:08:00.257502 6847 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:08:00.257557 6847 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:08:00.257705 6847 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:08:00.257937 6847 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:08:00.257978 6847 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:08:00.265752 6847 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1128 11:08:00.265783 6847 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1128 11:08:00.265850 6847 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:08:00.265888 6847 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 11:08:00.265985 6847 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j87wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7vdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.623032 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23af5070-24a6-4bab-a4d4-48539af4f256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3677a7efed402fa1a66e5132213a37997b9884b338530dbaaac970388908c1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79c9fde7f0db18e46d1d83f8aa77b667e73248b01bedd312e6207aac4b1157d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bd2e116bcef56e77a05a6c0f0f5e145fa6c3e7083501955a8e4a9a4c3467e6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43c2a25444e8593716bb6863b8b4542b0ba5fd58bedd44cb0295ac05ffffc9f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-28T11:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e44a94fb6cb44918c2ce0e5a28d042e586537950ab9dcd95f42bb5c031bb734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cb94f45c4786e230109f458bddf082e65267a6df82d069346d16ee304c5aac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71108e9a9a5570d91d20260bde357c5189fa73ce63dabfea9e5c65fa6880bc83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8kmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xhnbl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.640298 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qstr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tftxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qstr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.658589 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.673741 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wgsks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8042d2-30a3-4b7a-8a7a-e6e4603b04d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537597887f63d20a9af5e1615384dccff884d398c9e7039ae10e5c4d1d834831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wgsks\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.680227 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.680270 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.680286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.680309 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.680325 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:02Z","lastTransitionTime":"2025-11-28T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.697151 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17176782-0587-4f15-a744-3cb248f523e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:07:00Z\\\",\\\"message\\\":\\\"128 11:07:00.192195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:07:00.192215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192219 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:07:00.192224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:07:00.192227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:07:00.192231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:07:00.192233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:07:00.192242 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1128 11:07:00.195042 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195072 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 11:07:00.195112 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195124 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 11:07:00.195141 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 11:07:00.195148 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 11:07:00.195212 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1128 11:07:00.195232 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.714940 4772 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.729631 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab291008a9652f2208c2c33e6035a0a59b93668e3359d51fd8732b7cced11785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab7f1c8efdc111e0c5313e210f7c88e94dd8b667b0839c6501ea0da9645e5da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.749729 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qsnnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4e5807b-7c14-477e-af8b-1260b997ff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d71e6561215a264909d21c3847fb2269b14c5933345eee667243e9cbf3a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:07:47Z\\\",\\\"message\\\":\\\"2025-11-28T11:07:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3cb9219d-4ef9-4254-bdd1-7f19bc46022c\\\\n2025-11-28T11:07:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3cb9219d-4ef9-4254-bdd1-7f19bc46022c to /host/opt/cni/bin/\\\\n2025-11-28T11:07:02Z [verbose] multus-daemon started\\\\n2025-11-28T11:07:02Z [verbose] Readiness Indicator file check\\\\n2025-11-28T11:07:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:07:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjn9n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qsnnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.766424 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6dc1489-0a4e-4460-8277-767899445f92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fe65e53df1dbdcfab9c4279f50388488feb8e568869c9c739ce3528481933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9397dd9b33de8b65b98efff7058170cc237f0d9096334c1a1c559df599d72461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ea3ebd98f76e1cfd5a7013fab349d59556a3b3d2de76b4ba06bff3bf0dc4ce5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:06:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.779909 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebeb521a725b482de6762dd86d2ca33f4b3e138a82be043241dde6cfe8a44852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.782852 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.782905 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.782964 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.782990 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.783008 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:02Z","lastTransitionTime":"2025-11-28T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.796674 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b33ea4b3-c282-4391-8da3-3a499a23bb16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2544ecdca2b92fc120ed86e2cdaadf6f3bf549fb1a4ec03753e28a46f3c1a24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fca12075e15d87834bbf58abf5a546ed1a32e594df0ff724d695e3f456170821\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsmg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:07:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jrkks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:08:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.886622 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.886701 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.886720 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.886751 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.886774 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:02Z","lastTransitionTime":"2025-11-28T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.989483 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.989563 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.989626 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.989696 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.989739 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:02Z","lastTransitionTime":"2025-11-28T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.994704 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.994741 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.994794 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:02 crc kubenswrapper[4772]: I1128 11:08:02.994749 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:02 crc kubenswrapper[4772]: E1128 11:08:02.994878 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:02 crc kubenswrapper[4772]: E1128 11:08:02.995007 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:02 crc kubenswrapper[4772]: E1128 11:08:02.995168 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:02 crc kubenswrapper[4772]: E1128 11:08:02.995245 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.092908 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.092989 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.093013 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.093045 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.093111 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:03Z","lastTransitionTime":"2025-11-28T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.196927 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.196989 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.197010 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.197067 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.197086 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:03Z","lastTransitionTime":"2025-11-28T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.299996 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.300053 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.300069 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.300092 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.300109 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:03Z","lastTransitionTime":"2025-11-28T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.402589 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.402639 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.402652 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.402670 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.402683 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:03Z","lastTransitionTime":"2025-11-28T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.504875 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.504961 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.504974 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.504992 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.505005 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:03Z","lastTransitionTime":"2025-11-28T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.608160 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.608227 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.608251 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.608281 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.608304 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:03Z","lastTransitionTime":"2025-11-28T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.710874 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.710916 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.710928 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.710943 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.710956 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:03Z","lastTransitionTime":"2025-11-28T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.813595 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.813640 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.813654 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.813669 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.813681 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:03Z","lastTransitionTime":"2025-11-28T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.916659 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.916724 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.916743 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.916767 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:03 crc kubenswrapper[4772]: I1128 11:08:03.916825 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:03Z","lastTransitionTime":"2025-11-28T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.007920 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.008096 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.008141 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:08.008092894 +0000 UTC m=+146.331336161 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
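The unmount failure above is a separate problem from CNI: the volume manager cannot tear down the hostpath-provisioner PVC because no CSI driver named kubevirt.io.hostpath-provisioner has re-registered with the kubelet since the restart, and the retry is parked for 1m4s by exponential backoff (consistent with a 500 ms initial delay doubling per failure, since 0.5 s x 2^7 = 64 s, though the exact constants are a kubelet internal). Node-local CSI drivers announce themselves through registration sockets under the kubelet's plugins_registry directory, so listing it shows what the kubelet currently knows about; a sketch assuming the default kubelet root dir:

// csicheck.go: list CSI registration sockets under the kubelet plugin
// registry. A driver with no socket here is "not found in the list of
// registered CSI drivers" from the kubelet's point of view.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Default kubelet root dir; adjust if --root-dir is customized.
	reg := "/var/lib/kubelet/plugins_registry"
	entries, err := os.ReadDir(reg)
	if err != nil {
		fmt.Println("cannot read plugin registry:", err)
		return
	}
	if len(entries) == 0 {
		fmt.Println("no plugins registered yet")
	}
	for _, e := range entries {
		// registrars typically create sockets named <driver-name>-reg.sock
		fmt.Println(filepath.Join(reg, e.Name()))
	}
}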
Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.008259 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.008282 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.008309 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.008334 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.008454 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.008304 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.008477 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.008573 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.008485 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.008537 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:09:08.008506095 +0000 UTC m=+146.331749362 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.008680 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.008733 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:09:08.008716021 +0000 UTC m=+146.331959258 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.008778 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:09:08.008769572 +0000 UTC m=+146.332012809 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.008775 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
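The burst of "object ... not registered" errors around these projected volumes is easy to misread. The message comes from the kubelet's ConfigMap/Secret managers, which only watch objects on behalf of pods that have been registered with them; during this startup window these pods are not yet registered, so lookups fail locally regardless of whether kube-root-ca.crt, openshift-service-ca.crt, or networking-console-plugin-cert actually exist in the API server. A hedged client-go sketch (assuming a reachable apiserver, a recent client-go, and a kubeconfig path that may differ on your host; object names are taken from the log) checks the API side directly:

// objcheck.go: confirm the ConfigMaps and Secret named in the mount
// errors exist in the API. "not registered" in the kubelet log refers
// to the kubelet's own object cache, not API-side existence.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	for _, cm := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		_, err := cs.CoreV1().ConfigMaps("openshift-network-diagnostics").Get(ctx, cm, metav1.GetOptions{})
		fmt.Printf("configmap %s: err=%v\n", cm, err)
	}
	_, err = cs.CoreV1().Secrets("openshift-network-console").Get(ctx, "networking-console-plugin-cert", metav1.GetOptions{})
	fmt.Printf("secret networking-console-plugin-cert: err=%v\n", err)
}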
Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.008870 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:09:08.008852775 +0000 UTC m=+146.332096042 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.019825 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.019886 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.019903 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.019928 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.019947 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:04Z","lastTransitionTime":"2025-11-28T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.122323 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.122427 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.122444 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.122468 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.122485 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:04Z","lastTransitionTime":"2025-11-28T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.225555 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.225626 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.225648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.225677 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.225700 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:04Z","lastTransitionTime":"2025-11-28T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.329484 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.329561 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.329576 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.329599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.329618 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:04Z","lastTransitionTime":"2025-11-28T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.433148 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.433196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.433213 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.433236 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.433253 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:04Z","lastTransitionTime":"2025-11-28T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.536517 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.536597 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.536609 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.536628 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.536641 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:04Z","lastTransitionTime":"2025-11-28T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.640256 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.640298 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.640310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.640349 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.640403 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:04Z","lastTransitionTime":"2025-11-28T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.743066 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.743144 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.743161 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.743178 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.743191 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:04Z","lastTransitionTime":"2025-11-28T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.846839 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.846918 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.846939 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.846972 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.846995 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:04Z","lastTransitionTime":"2025-11-28T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.949274 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.949310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.949320 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.949335 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.949346 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:04Z","lastTransitionTime":"2025-11-28T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.993590 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.993712 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.993775 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.993835 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.993882 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.993934 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:04 crc kubenswrapper[4772]: I1128 11:08:04.993972 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:04 crc kubenswrapper[4772]: E1128 11:08:04.994018 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.052553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.052623 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.052635 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.052654 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.052666 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:05Z","lastTransitionTime":"2025-11-28T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.156620 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.156688 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.156706 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.156736 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.156753 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:05Z","lastTransitionTime":"2025-11-28T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.260354 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.260398 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.260407 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.260448 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.260505 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:05Z","lastTransitionTime":"2025-11-28T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.364534 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.364853 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.364865 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.364887 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.364902 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:05Z","lastTransitionTime":"2025-11-28T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.468578 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.468655 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.468675 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.468707 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.468730 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:05Z","lastTransitionTime":"2025-11-28T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.571964 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.572037 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.572055 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.572084 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.572103 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:05Z","lastTransitionTime":"2025-11-28T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.675398 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.675474 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.675492 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.675521 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.675560 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:05Z","lastTransitionTime":"2025-11-28T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.778467 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.778538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.778556 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.778582 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.778600 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:05Z","lastTransitionTime":"2025-11-28T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.881861 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.881923 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.881941 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.881967 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.881986 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:05Z","lastTransitionTime":"2025-11-28T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.984570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.984644 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.984661 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.984686 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
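The rest of this window is the same five-record heartbeat repeating about every 100 ms: four "Recording event message" entries plus the setters.go:603 condition write. Note that only Ready is False; the memory, disk-pressure, and PID conditions stay healthy throughout, which localizes the problem to networking rather than node resources. The same condition set can be read from outside the node with client-go (same kubeconfig assumption as the previous sketch; the node name "crc" is taken from the log):

// nodecond.go: print the node's conditions, mirroring what setters.go
// records in the kubelet log (Ready=False with reason KubeletNotReady
// while the CNI configuration is missing).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-22s %-6s reason=%s\n", c.Type, c.Status, c.Reason)
	}
}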
Nov 28 11:08:05 crc kubenswrapper[4772]: I1128 11:08:05.984705 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:05Z","lastTransitionTime":"2025-11-28T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.087621 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.087787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.087806 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.087844 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.087864 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:06Z","lastTransitionTime":"2025-11-28T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.190959 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.191056 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.191090 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.191120 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.191142 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:06Z","lastTransitionTime":"2025-11-28T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.294617 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.294677 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.294704 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.294727 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.294743 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:06Z","lastTransitionTime":"2025-11-28T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.397500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.397559 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.397576 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.397603 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.397620 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:06Z","lastTransitionTime":"2025-11-28T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.500213 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.500277 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.500294 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.500318 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.500335 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:06Z","lastTransitionTime":"2025-11-28T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.604661 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.604716 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.604733 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.604756 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.604773 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:06Z","lastTransitionTime":"2025-11-28T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.707821 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.707934 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.707951 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.707978 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.707998 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:06Z","lastTransitionTime":"2025-11-28T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.811314 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.811505 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.811539 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.811570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.811597 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:06Z","lastTransitionTime":"2025-11-28T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.915318 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.915430 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.915454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.915485 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.915508 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:06Z","lastTransitionTime":"2025-11-28T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.994436 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.994463 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.994505 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:06 crc kubenswrapper[4772]: E1128 11:08:06.994619 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:06 crc kubenswrapper[4772]: I1128 11:08:06.994665 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:06 crc kubenswrapper[4772]: E1128 11:08:06.994834 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:06 crc kubenswrapper[4772]: E1128 11:08:06.994945 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:06 crc kubenswrapper[4772]: E1128 11:08:06.995100 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.018434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.018475 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.018485 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.018499 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.018510 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:07Z","lastTransitionTime":"2025-11-28T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.122410 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.122485 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.122505 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.122530 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.122549 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:07Z","lastTransitionTime":"2025-11-28T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.225840 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.225888 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.225896 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.225913 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.225922 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:07Z","lastTransitionTime":"2025-11-28T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.328974 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.329019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.329033 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.329052 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.329065 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:07Z","lastTransitionTime":"2025-11-28T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.438181 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.438224 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.438239 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.438258 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.438272 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:07Z","lastTransitionTime":"2025-11-28T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.540666 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.540703 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.540740 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.540756 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.540766 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:07Z","lastTransitionTime":"2025-11-28T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.644391 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.644481 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.644514 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.644546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.644596 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:07Z","lastTransitionTime":"2025-11-28T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.747036 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.747094 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.747118 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.747149 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.747173 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:07Z","lastTransitionTime":"2025-11-28T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.850200 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.850233 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.850242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.850257 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.850269 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:07Z","lastTransitionTime":"2025-11-28T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.953071 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.953120 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.953131 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.953147 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:07 crc kubenswrapper[4772]: I1128 11:08:07.953160 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:07Z","lastTransitionTime":"2025-11-28T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.008658 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.055845 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.055874 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.055882 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.055893 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.055902 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:08Z","lastTransitionTime":"2025-11-28T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.159030 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.159081 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.159099 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.159123 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.159144 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:08Z","lastTransitionTime":"2025-11-28T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.261711 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.261758 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.261770 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.261792 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.261806 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:08Z","lastTransitionTime":"2025-11-28T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.365301 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.365846 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.365894 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.365922 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.365946 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:08Z","lastTransitionTime":"2025-11-28T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.469214 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.469272 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.469287 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.469306 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.469319 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:08Z","lastTransitionTime":"2025-11-28T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.571863 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.571905 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.571913 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.571928 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.571938 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:08Z","lastTransitionTime":"2025-11-28T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.674677 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.674755 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.674772 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.674795 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.674813 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:08Z","lastTransitionTime":"2025-11-28T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.780009 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.780098 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.780119 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.780149 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.780175 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:08Z","lastTransitionTime":"2025-11-28T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.883348 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.883426 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.883442 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.883484 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.883527 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:08Z","lastTransitionTime":"2025-11-28T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.986732 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.986803 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.986827 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.986858 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.986884 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:08Z","lastTransitionTime":"2025-11-28T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.994334 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.994442 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.994388 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:08 crc kubenswrapper[4772]: E1128 11:08:08.994658 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:08 crc kubenswrapper[4772]: I1128 11:08:08.994756 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:08 crc kubenswrapper[4772]: E1128 11:08:08.994965 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:08 crc kubenswrapper[4772]: E1128 11:08:08.995140 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:08 crc kubenswrapper[4772]: E1128 11:08:08.995241 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.090565 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.090651 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.090676 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.090709 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.090730 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:09Z","lastTransitionTime":"2025-11-28T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.193393 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.193442 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.193459 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.193482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.193500 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:09Z","lastTransitionTime":"2025-11-28T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.296847 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.296878 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.296889 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.296905 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.296918 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:09Z","lastTransitionTime":"2025-11-28T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.400506 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.400570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.400582 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.400607 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.400622 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:09Z","lastTransitionTime":"2025-11-28T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.504414 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.504517 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.504537 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.504574 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.504594 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:09Z","lastTransitionTime":"2025-11-28T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.608008 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.608055 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.608067 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.608086 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.608099 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:09Z","lastTransitionTime":"2025-11-28T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.711425 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.711473 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.711485 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.711506 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.711519 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:09Z","lastTransitionTime":"2025-11-28T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.814846 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.814902 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.814915 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.814931 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.815244 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:09Z","lastTransitionTime":"2025-11-28T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.918623 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.918701 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.918725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.918755 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:09 crc kubenswrapper[4772]: I1128 11:08:09.918774 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:09Z","lastTransitionTime":"2025-11-28T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.022593 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.022669 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.022690 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.022716 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.022738 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:10Z","lastTransitionTime":"2025-11-28T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.126567 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.126628 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.126645 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.126672 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.126690 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:10Z","lastTransitionTime":"2025-11-28T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.229024 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.229062 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.229073 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.229087 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.229099 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:10Z","lastTransitionTime":"2025-11-28T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.331442 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.331481 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.331489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.331502 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.331511 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:10Z","lastTransitionTime":"2025-11-28T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.434540 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.434660 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.434687 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.434716 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.434738 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:10Z","lastTransitionTime":"2025-11-28T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.538245 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.538318 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.538336 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.538385 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.538405 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:10Z","lastTransitionTime":"2025-11-28T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.641674 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.641730 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.641750 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.641777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.641800 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:10Z","lastTransitionTime":"2025-11-28T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.745461 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.745526 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.745542 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.745561 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.745575 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:10Z","lastTransitionTime":"2025-11-28T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.849094 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.849142 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.849154 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.849169 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.849180 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:10Z","lastTransitionTime":"2025-11-28T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.952405 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.952494 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.952512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.952548 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.952575 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:10Z","lastTransitionTime":"2025-11-28T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.993478 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.993541 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.993605 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:10 crc kubenswrapper[4772]: I1128 11:08:10.993476 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:10 crc kubenswrapper[4772]: E1128 11:08:10.993691 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:10 crc kubenswrapper[4772]: E1128 11:08:10.993800 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:10 crc kubenswrapper[4772]: E1128 11:08:10.993976 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:10 crc kubenswrapper[4772]: E1128 11:08:10.994206 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.056552 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.056661 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.056701 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.056732 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.056756 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:11Z","lastTransitionTime":"2025-11-28T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.161322 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.161425 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.161442 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.161468 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.161488 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:11Z","lastTransitionTime":"2025-11-28T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.264256 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.264329 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.264349 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.264418 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.264441 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:11Z","lastTransitionTime":"2025-11-28T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.369054 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.369135 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.369154 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.369182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.369201 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:11Z","lastTransitionTime":"2025-11-28T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.473014 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.473085 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.473104 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.473134 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.473157 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:11Z","lastTransitionTime":"2025-11-28T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.577122 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.577171 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.577182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.577203 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.577216 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:11Z","lastTransitionTime":"2025-11-28T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.680320 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.680418 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.680441 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.680468 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.680487 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:11Z","lastTransitionTime":"2025-11-28T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.784081 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.784415 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.784439 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.784471 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.784504 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:11Z","lastTransitionTime":"2025-11-28T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.889066 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.889142 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.889161 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.889187 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.889208 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:11Z","lastTransitionTime":"2025-11-28T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.993059 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.993138 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.993158 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.993191 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.993213 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:11Z","lastTransitionTime":"2025-11-28T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.049486 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jrkks" podStartSLOduration=71.049446182 podStartE2EDuration="1m11.049446182s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:12.04934129 +0000 UTC m=+90.372584557" watchObservedRunningTime="2025-11-28 11:08:12.049446182 +0000 UTC m=+90.372689449" Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.074649 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=44.074598237 podStartE2EDuration="44.074598237s" podCreationTimestamp="2025-11-28 11:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:12.074303099 +0000 UTC m=+90.397546376" watchObservedRunningTime="2025-11-28 11:08:12.074598237 +0000 UTC m=+90.397841504" Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.096647 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.096707 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.096729 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.096757 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.096781 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:12Z","lastTransitionTime":"2025-11-28T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
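The five-line blocks above repeat roughly every 100 ms: each pass of the kubelet's node-status loop re-records the node events and re-evaluates the Ready condition, and every pass fails for the same reason, because no CNI config has been written to /etc/kubernetes/cni/net.d/ yet (on this cluster that normally clears once the Multus/OVN-Kubernetes pods come up). A minimal sketch for pulling these Ready-condition records out of a journal dump like this one; the regex assumes exactly the setters.go line shape seen above, nothing more.

```python
import json
import re

# Minimal sketch, assuming the setters.go:603 line shape in this journal:
#   ... setters.go:603] "Node became not ready" node="crc" condition={...}
# Illustrative log scraping only, not a kubelet API.
COND_RE = re.compile(r'setters\.go:\d+\] "Node became not ready" node="([^"]+)" condition=(\{.*?\})')

def ready_flaps(journal_text):
    """Yield (node, lastHeartbeatTime, reason) for each NotReady record."""
    for m in COND_RE.finditer(journal_text):
        cond = json.loads(m.group(2))
        yield m.group(1), cond["lastHeartbeatTime"], cond["reason"]

sample = ('Nov 28 11:08:11 crc kubenswrapper[4772]: I1128 11:08:11.056756 4772 '
          'setters.go:603] "Node became not ready" node="crc" '
          'condition={"type":"Ready","status":"False",'
          '"lastHeartbeatTime":"2025-11-28T11:08:11Z",'
          '"lastTransitionTime":"2025-11-28T11:08:11Z",'
          '"reason":"KubeletNotReady","message":"container runtime network not ready"}')
print(list(ready_flaps(sample)))  # [('crc', '2025-11-28T11:08:11Z', 'KubeletNotReady')]
```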
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.148086 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=73.148056766 podStartE2EDuration="1m13.148056766s" podCreationTimestamp="2025-11-28 11:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:12.119945522 +0000 UTC m=+90.443188829" watchObservedRunningTime="2025-11-28 11:08:12.148056766 +0000 UTC m=+90.471300003"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.198946 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.198985 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.198996 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.199011 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.199022 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:12Z","lastTransitionTime":"2025-11-28T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.205231 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mstcz" podStartSLOduration=72.205218298 podStartE2EDuration="1m12.205218298s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:12.186580639 +0000 UTC m=+90.509823886" watchObservedRunningTime="2025-11-28 11:08:12.205218298 +0000 UTC m=+90.528461515"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.205520 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podStartSLOduration=72.205516606 podStartE2EDuration="1m12.205516606s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:12.204801557 +0000 UTC m=+90.528044814" watchObservedRunningTime="2025-11-28 11:08:12.205516606 +0000 UTC m=+90.528759833"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.232663 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wgsks" podStartSLOduration=72.232631713 podStartE2EDuration="1m12.232631713s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:12.232120099 +0000 UTC m=+90.555363336" watchObservedRunningTime="2025-11-28 11:08:12.232631713 +0000 UTC m=+90.555874970"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.262619 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.262678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.262695 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.262717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.262734 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:08:12Z","lastTransitionTime":"2025-11-28T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.307266 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xhnbl" podStartSLOduration=72.307238373 podStartE2EDuration="1m12.307238373s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:12.286442806 +0000 UTC m=+90.609686043" watchObservedRunningTime="2025-11-28 11:08:12.307238373 +0000 UTC m=+90.630481610"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.347416 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=65.347384809 podStartE2EDuration="1m5.347384809s" podCreationTimestamp="2025-11-28 11:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:12.332812379 +0000 UTC m=+90.656055606" watchObservedRunningTime="2025-11-28 11:08:12.347384809 +0000 UTC m=+90.670628046"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.347992 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=4.347984695 podStartE2EDuration="4.347984695s" podCreationTimestamp="2025-11-28 11:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:12.346614419 +0000 UTC m=+90.669857656" watchObservedRunningTime="2025-11-28 11:08:12.347984695 +0000 UTC m=+90.671227932"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.350112 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"]
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.350689 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
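For the pod_startup_latency_tracker records above and below, podStartSLOduration is just the observed running time minus podCreationTimestamp; the zero-valued firstStartedPulling/lastFinishedPulling timestamps mean no image pull was counted toward the SLO. A quick check of the ovnkube-control-plane figure with the values from the log:

```python
from datetime import datetime, timezone

# Reproducing the podStartSLOduration arithmetic from the
# ovnkube-control-plane-749d76644c-jrkks record above:
#   podCreationTimestamp     = 2025-11-28 11:07:01
#   watchObservedRunningTime = 2025-11-28 11:08:12.049446182
created = datetime(2025, 11, 28, 11, 7, 1, tzinfo=timezone.utc)
observed = datetime(2025, 11, 28, 11, 8, 12, 49446, tzinfo=timezone.utc)
print((observed - created).total_seconds())  # 71.049446 ~= podStartSLOduration=71.049446182
```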
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.352043 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.353170 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.353270 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.354868 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.394269 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.394240845 podStartE2EDuration="1m11.394240845s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:12.370127709 +0000 UTC m=+90.693370956" watchObservedRunningTime="2025-11-28 11:08:12.394240845 +0000 UTC m=+90.717484082"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.433214 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-qsnnj" podStartSLOduration=72.433185619 podStartE2EDuration="1m12.433185619s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:12.432275425 +0000 UTC m=+90.755518662" watchObservedRunningTime="2025-11-28 11:08:12.433185619 +0000 UTC m=+90.756428886"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.512743 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.513088 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.513252 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.513392 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.513516 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.614879 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.615296 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.615549 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.615739 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.615092 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.616156 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.615915 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.616861 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.629084 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.634121 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69a3bdab-9d2a-41d1-9987-2f66ab8214f3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kr48n\" (UID: \"69a3bdab-9d2a-41d1-9987-2f66ab8214f3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.669038 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.994232 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.994479 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.994509 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:08:12 crc kubenswrapper[4772]: E1128 11:08:12.994718 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:08:12 crc kubenswrapper[4772]: E1128 11:08:12.994800 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:12 crc kubenswrapper[4772]: E1128 11:08:12.994850 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:12 crc kubenswrapper[4772]: I1128 11:08:12.996053 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:12 crc kubenswrapper[4772]: E1128 11:08:12.996823 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:13 crc kubenswrapper[4772]: I1128 11:08:13.522753 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n" event={"ID":"69a3bdab-9d2a-41d1-9987-2f66ab8214f3","Type":"ContainerStarted","Data":"d85d9dfc7820d41165142be0a6408aace1c0d52f2acd654ad1c95205f8c95a2d"} Nov 28 11:08:13 crc kubenswrapper[4772]: I1128 11:08:13.522811 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n" event={"ID":"69a3bdab-9d2a-41d1-9987-2f66ab8214f3","Type":"ContainerStarted","Data":"1a3f121cbf2569c45d315ab148fde3670c5fde8a7b0a2c237a9c831bb334434e"} Nov 28 11:08:13 crc kubenswrapper[4772]: I1128 11:08:13.551204 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kr48n" podStartSLOduration=73.551165259 podStartE2EDuration="1m13.551165259s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:13.549809833 +0000 UTC m=+91.873053060" watchObservedRunningTime="2025-11-28 11:08:13.551165259 +0000 UTC m=+91.874408526" Nov 28 11:08:14 crc kubenswrapper[4772]: I1128 11:08:14.994219 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:14 crc kubenswrapper[4772]: I1128 11:08:14.994251 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:14 crc kubenswrapper[4772]: I1128 11:08:14.994351 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:14 crc kubenswrapper[4772]: I1128 11:08:14.994459 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:14 crc kubenswrapper[4772]: E1128 11:08:14.994457 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:14 crc kubenswrapper[4772]: E1128 11:08:14.994550 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:14 crc kubenswrapper[4772]: E1128 11:08:14.994791 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:14 crc kubenswrapper[4772]: E1128 11:08:14.994918 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:16 crc kubenswrapper[4772]: I1128 11:08:16.993452 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:16 crc kubenswrapper[4772]: E1128 11:08:16.993649 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:16 crc kubenswrapper[4772]: I1128 11:08:16.993896 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:16 crc kubenswrapper[4772]: E1128 11:08:16.993991 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:16 crc kubenswrapper[4772]: I1128 11:08:16.994909 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:16 crc kubenswrapper[4772]: E1128 11:08:16.995094 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:16 crc kubenswrapper[4772]: I1128 11:08:16.995205 4772 scope.go:117] "RemoveContainer" containerID="1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f" Nov 28 11:08:16 crc kubenswrapper[4772]: I1128 11:08:16.995320 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:16 crc kubenswrapper[4772]: E1128 11:08:16.995407 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:16 crc kubenswrapper[4772]: E1128 11:08:16.995430 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" Nov 28 11:08:18 crc kubenswrapper[4772]: I1128 11:08:18.993710 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:18 crc kubenswrapper[4772]: I1128 11:08:18.993745 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:18 crc kubenswrapper[4772]: I1128 11:08:18.993837 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:18 crc kubenswrapper[4772]: I1128 11:08:18.994067 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:18 crc kubenswrapper[4772]: E1128 11:08:18.994029 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:18 crc kubenswrapper[4772]: E1128 11:08:18.994244 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:18 crc kubenswrapper[4772]: E1128 11:08:18.994428 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:18 crc kubenswrapper[4772]: E1128 11:08:18.994503 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:19 crc kubenswrapper[4772]: I1128 11:08:19.698139 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs\") pod \"network-metrics-daemon-qstr6\" (UID: \"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\") " pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:19 crc kubenswrapper[4772]: E1128 11:08:19.698322 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:08:19 crc kubenswrapper[4772]: E1128 11:08:19.698455 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs podName:def9b3ab-2dc8-4f40-9d6b-346f9cdbc386 nodeName:}" failed. No retries permitted until 2025-11-28 11:09:23.698431342 +0000 UTC m=+162.021674599 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs") pod "network-metrics-daemon-qstr6" (UID: "def9b3ab-2dc8-4f40-9d6b-346f9cdbc386") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:08:20 crc kubenswrapper[4772]: I1128 11:08:20.993869 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:20 crc kubenswrapper[4772]: I1128 11:08:20.993936 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:20 crc kubenswrapper[4772]: I1128 11:08:20.993978 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:20 crc kubenswrapper[4772]: E1128 11:08:20.994070 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:20 crc kubenswrapper[4772]: I1128 11:08:20.994103 4772 util.go:30] "No sandbox for pod can be found. 
Nov 28 11:08:20 crc kubenswrapper[4772]: E1128 11:08:20.994281 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:08:20 crc kubenswrapper[4772]: E1128 11:08:20.994599 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:08:20 crc kubenswrapper[4772]: E1128 11:08:20.994683 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:08:22 crc kubenswrapper[4772]: I1128 11:08:22.994393 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:08:22 crc kubenswrapper[4772]: I1128 11:08:22.994394 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:08:22 crc kubenswrapper[4772]: I1128 11:08:22.994469 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:08:22 crc kubenswrapper[4772]: I1128 11:08:22.994601 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:08:22 crc kubenswrapper[4772]: E1128 11:08:22.994997 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:08:22 crc kubenswrapper[4772]: E1128 11:08:22.995116 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 11:08:22 crc kubenswrapper[4772]: E1128 11:08:22.995229 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:08:22 crc kubenswrapper[4772]: E1128 11:08:22.995356 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:08:24 crc kubenswrapper[4772]: I1128 11:08:24.993971 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:08:24 crc kubenswrapper[4772]: I1128 11:08:24.994000 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:08:24 crc kubenswrapper[4772]: E1128 11:08:24.994230 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:08:24 crc kubenswrapper[4772]: I1128 11:08:24.994259 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:08:24 crc kubenswrapper[4772]: I1128 11:08:24.994337 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:08:24 crc kubenswrapper[4772]: E1128 11:08:24.994507 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 11:08:24 crc kubenswrapper[4772]: E1128 11:08:24.994641 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:08:24 crc kubenswrapper[4772]: E1128 11:08:24.994727 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:08:26 crc kubenswrapper[4772]: I1128 11:08:26.994118 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:08:26 crc kubenswrapper[4772]: I1128 11:08:26.994113 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:08:26 crc kubenswrapper[4772]: E1128 11:08:26.994552 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:08:26 crc kubenswrapper[4772]: I1128 11:08:26.994277 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:08:26 crc kubenswrapper[4772]: I1128 11:08:26.994113 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:08:26 crc kubenswrapper[4772]: E1128 11:08:26.994683 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:08:26 crc kubenswrapper[4772]: E1128 11:08:26.994745 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 11:08:26 crc kubenswrapper[4772]: E1128 11:08:26.994819 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:08:28 crc kubenswrapper[4772]: I1128 11:08:28.993724 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:08:28 crc kubenswrapper[4772]: I1128 11:08:28.993727 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:08:28 crc kubenswrapper[4772]: I1128 11:08:28.993756 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:08:28 crc kubenswrapper[4772]: I1128 11:08:28.993771 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:08:28 crc kubenswrapper[4772]: E1128 11:08:28.994136 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:08:28 crc kubenswrapper[4772]: E1128 11:08:28.994271 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 11:08:28 crc kubenswrapper[4772]: E1128 11:08:28.994352 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:08:28 crc kubenswrapper[4772]: E1128 11:08:28.994468 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:08:28 crc kubenswrapper[4772]: I1128 11:08:28.994508 4772 scope.go:117] "RemoveContainer" containerID="1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f"
Nov 28 11:08:28 crc kubenswrapper[4772]: E1128 11:08:28.994715 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b7vdn_openshift-ovn-kubernetes(52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"
Nov 28 11:08:30 crc kubenswrapper[4772]: I1128 11:08:30.994377 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
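Both CrashLoopBackOff messages in this stretch are consistent with the kubelet's standard restart backoff, which doubles per failed restart from 10 s up to a 5-minute cap (the usual defaults, assumed here rather than read from the log): ovnkube-controller at "back-off 40s" is on the third delay, and kube-multus further down hits the first at "back-off 10s".

```python
# Sketch of the kubelet CrashLoopBackOff schedule under the usual defaults
# (10s base, doubling, 5m cap) -- assumptions, not values from this log.
def crashloop_delays(restarts=7, base=10, cap=300):
    delay = base
    for n in range(1, restarts + 1):
        yield n, min(delay, cap)
        delay *= 2

print(dict(crashloop_delays()))
# {1: 10, 2: 20, 3: 40, 4: 80, 5: 160, 6: 300, 7: 300}
# "back-off 40s" for ovnkube-controller is delay 3; "back-off 10s" for
# kube-multus (below) is delay 1.
```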
Nov 28 11:08:30 crc kubenswrapper[4772]: I1128 11:08:30.994418 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:08:30 crc kubenswrapper[4772]: I1128 11:08:30.994465 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:08:30 crc kubenswrapper[4772]: I1128 11:08:30.994383 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:08:30 crc kubenswrapper[4772]: E1128 11:08:30.994588 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:08:30 crc kubenswrapper[4772]: E1128 11:08:30.994515 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:08:30 crc kubenswrapper[4772]: E1128 11:08:30.994755 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 11:08:30 crc kubenswrapper[4772]: E1128 11:08:30.994877 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:08:32 crc kubenswrapper[4772]: I1128 11:08:32.994766 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:08:32 crc kubenswrapper[4772]: E1128 11:08:32.996050 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:08:32 crc kubenswrapper[4772]: I1128 11:08:32.994780 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:08:32 crc kubenswrapper[4772]: E1128 11:08:32.996877 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 11:08:32 crc kubenswrapper[4772]: I1128 11:08:32.994833 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:08:32 crc kubenswrapper[4772]: E1128 11:08:32.997552 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:08:32 crc kubenswrapper[4772]: I1128 11:08:32.994783 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:08:32 crc kubenswrapper[4772]: E1128 11:08:32.998015 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:08:34 crc kubenswrapper[4772]: I1128 11:08:34.591981 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qsnnj_a4e5807b-7c14-477e-af8b-1260b997ff17/kube-multus/1.log"
Nov 28 11:08:34 crc kubenswrapper[4772]: I1128 11:08:34.592742 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qsnnj_a4e5807b-7c14-477e-af8b-1260b997ff17/kube-multus/0.log"
Nov 28 11:08:34 crc kubenswrapper[4772]: I1128 11:08:34.592793 4772 generic.go:334] "Generic (PLEG): container finished" podID="a4e5807b-7c14-477e-af8b-1260b997ff17" containerID="125d71e6561215a264909d21c3847fb2269b14c5933345eee667243e9cbf3a4a" exitCode=1
Nov 28 11:08:34 crc kubenswrapper[4772]: I1128 11:08:34.592828 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qsnnj" event={"ID":"a4e5807b-7c14-477e-af8b-1260b997ff17","Type":"ContainerDied","Data":"125d71e6561215a264909d21c3847fb2269b14c5933345eee667243e9cbf3a4a"}
Nov 28 11:08:34 crc kubenswrapper[4772]: I1128 11:08:34.592867 4772 scope.go:117] "RemoveContainer" containerID="a0462b55d6630570bf50c6ab26ec7bc53712e79c8ffeae17b4a705f950d0816b"
Nov 28 11:08:34 crc kubenswrapper[4772]: I1128 11:08:34.593254 4772 scope.go:117] "RemoveContainer" containerID="125d71e6561215a264909d21c3847fb2269b14c5933345eee667243e9cbf3a4a"
Nov 28 11:08:34 crc kubenswrapper[4772]: E1128 11:08:34.593447 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-qsnnj_openshift-multus(a4e5807b-7c14-477e-af8b-1260b997ff17)\"" pod="openshift-multus/multus-qsnnj" podUID="a4e5807b-7c14-477e-af8b-1260b997ff17"
Nov 28 11:08:34 crc kubenswrapper[4772]: I1128 11:08:34.993828 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:08:34 crc kubenswrapper[4772]: I1128 11:08:34.993892 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:08:34 crc kubenswrapper[4772]: I1128 11:08:34.993970 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:08:34 crc kubenswrapper[4772]: I1128 11:08:34.994155 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:08:34 crc kubenswrapper[4772]: E1128 11:08:34.994133 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:08:34 crc kubenswrapper[4772]: E1128 11:08:34.994227 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 11:08:34 crc kubenswrapper[4772]: E1128 11:08:34.994314 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:08:34 crc kubenswrapper[4772]: E1128 11:08:34.994406 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:08:35 crc kubenswrapper[4772]: I1128 11:08:35.598103 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qsnnj_a4e5807b-7c14-477e-af8b-1260b997ff17/kube-multus/1.log"
Nov 28 11:08:36 crc kubenswrapper[4772]: I1128 11:08:36.994139 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:08:36 crc kubenswrapper[4772]: I1128 11:08:36.994226 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:08:36 crc kubenswrapper[4772]: E1128 11:08:36.994411 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 11:08:36 crc kubenswrapper[4772]: I1128 11:08:36.994470 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:08:36 crc kubenswrapper[4772]: E1128 11:08:36.994659 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 11:08:36 crc kubenswrapper[4772]: E1128 11:08:36.994738 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386"
Nov 28 11:08:36 crc kubenswrapper[4772]: I1128 11:08:36.995539 4772 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:36 crc kubenswrapper[4772]: E1128 11:08:36.995650 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:38 crc kubenswrapper[4772]: I1128 11:08:38.993820 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:38 crc kubenswrapper[4772]: I1128 11:08:38.993844 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:38 crc kubenswrapper[4772]: I1128 11:08:38.993853 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:38 crc kubenswrapper[4772]: I1128 11:08:38.993949 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:38 crc kubenswrapper[4772]: E1128 11:08:38.994075 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:38 crc kubenswrapper[4772]: E1128 11:08:38.994803 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:38 crc kubenswrapper[4772]: E1128 11:08:38.994836 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:38 crc kubenswrapper[4772]: E1128 11:08:38.994959 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:40 crc kubenswrapper[4772]: I1128 11:08:40.994272 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:40 crc kubenswrapper[4772]: I1128 11:08:40.994309 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:40 crc kubenswrapper[4772]: I1128 11:08:40.994335 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:40 crc kubenswrapper[4772]: I1128 11:08:40.994395 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:40 crc kubenswrapper[4772]: E1128 11:08:40.994648 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:40 crc kubenswrapper[4772]: E1128 11:08:40.994732 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:40 crc kubenswrapper[4772]: E1128 11:08:40.994822 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:40 crc kubenswrapper[4772]: E1128 11:08:40.994997 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:41 crc kubenswrapper[4772]: E1128 11:08:41.942935 4772 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 28 11:08:42 crc kubenswrapper[4772]: E1128 11:08:42.087235 4772 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 11:08:42 crc kubenswrapper[4772]: I1128 11:08:42.993380 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:42 crc kubenswrapper[4772]: I1128 11:08:42.993401 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:42 crc kubenswrapper[4772]: I1128 11:08:42.993505 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:42 crc kubenswrapper[4772]: I1128 11:08:42.993509 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:42 crc kubenswrapper[4772]: E1128 11:08:42.993683 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:42 crc kubenswrapper[4772]: E1128 11:08:42.993775 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:42 crc kubenswrapper[4772]: E1128 11:08:42.993862 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:42 crc kubenswrapper[4772]: E1128 11:08:42.993953 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:43 crc kubenswrapper[4772]: I1128 11:08:43.995093 4772 scope.go:117] "RemoveContainer" containerID="1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f" Nov 28 11:08:44 crc kubenswrapper[4772]: I1128 11:08:44.633797 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/3.log" Nov 28 11:08:44 crc kubenswrapper[4772]: I1128 11:08:44.635991 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerStarted","Data":"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d"} Nov 28 11:08:44 crc kubenswrapper[4772]: I1128 11:08:44.636551 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:08:44 crc kubenswrapper[4772]: I1128 11:08:44.670880 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podStartSLOduration=104.670852617 podStartE2EDuration="1m44.670852617s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:44.670148048 +0000 UTC m=+122.993391295" watchObservedRunningTime="2025-11-28 11:08:44.670852617 +0000 UTC m=+122.994095844" Nov 28 11:08:44 crc kubenswrapper[4772]: I1128 11:08:44.745660 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qstr6"] Nov 28 11:08:44 crc kubenswrapper[4772]: I1128 11:08:44.745791 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:44 crc kubenswrapper[4772]: E1128 11:08:44.745872 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:44 crc kubenswrapper[4772]: I1128 11:08:44.994052 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:44 crc kubenswrapper[4772]: E1128 11:08:44.994213 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:44 crc kubenswrapper[4772]: I1128 11:08:44.994499 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:44 crc kubenswrapper[4772]: I1128 11:08:44.994579 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:44 crc kubenswrapper[4772]: E1128 11:08:44.994625 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:44 crc kubenswrapper[4772]: E1128 11:08:44.994715 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:46 crc kubenswrapper[4772]: I1128 11:08:46.994772 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:46 crc kubenswrapper[4772]: I1128 11:08:46.994809 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:46 crc kubenswrapper[4772]: I1128 11:08:46.994839 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:46 crc kubenswrapper[4772]: I1128 11:08:46.994927 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:46 crc kubenswrapper[4772]: I1128 11:08:46.995187 4772 scope.go:117] "RemoveContainer" containerID="125d71e6561215a264909d21c3847fb2269b14c5933345eee667243e9cbf3a4a" Nov 28 11:08:46 crc kubenswrapper[4772]: E1128 11:08:46.995303 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:46 crc kubenswrapper[4772]: E1128 11:08:46.995184 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:46 crc kubenswrapper[4772]: E1128 11:08:46.995550 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:46 crc kubenswrapper[4772]: E1128 11:08:46.995809 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:47 crc kubenswrapper[4772]: E1128 11:08:47.088233 4772 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 11:08:47 crc kubenswrapper[4772]: I1128 11:08:47.647388 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qsnnj_a4e5807b-7c14-477e-af8b-1260b997ff17/kube-multus/1.log" Nov 28 11:08:47 crc kubenswrapper[4772]: I1128 11:08:47.647441 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qsnnj" event={"ID":"a4e5807b-7c14-477e-af8b-1260b997ff17","Type":"ContainerStarted","Data":"252b4a3f25f72207c739fb18e3bec006c661da277d345c7af2069279d0879002"} Nov 28 11:08:48 crc kubenswrapper[4772]: I1128 11:08:48.994282 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:48 crc kubenswrapper[4772]: I1128 11:08:48.994296 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:48 crc kubenswrapper[4772]: E1128 11:08:48.994426 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:48 crc kubenswrapper[4772]: I1128 11:08:48.994447 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:48 crc kubenswrapper[4772]: I1128 11:08:48.994300 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:48 crc kubenswrapper[4772]: E1128 11:08:48.994528 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:48 crc kubenswrapper[4772]: E1128 11:08:48.994595 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:48 crc kubenswrapper[4772]: E1128 11:08:48.994754 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:50 crc kubenswrapper[4772]: I1128 11:08:50.993790 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:50 crc kubenswrapper[4772]: I1128 11:08:50.993816 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:50 crc kubenswrapper[4772]: I1128 11:08:50.993815 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:50 crc kubenswrapper[4772]: I1128 11:08:50.993811 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:50 crc kubenswrapper[4772]: E1128 11:08:50.993928 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qstr6" podUID="def9b3ab-2dc8-4f40-9d6b-346f9cdbc386" Nov 28 11:08:50 crc kubenswrapper[4772]: E1128 11:08:50.994007 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:08:50 crc kubenswrapper[4772]: E1128 11:08:50.994104 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:08:50 crc kubenswrapper[4772]: E1128 11:08:50.994253 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:08:52 crc kubenswrapper[4772]: I1128 11:08:52.993981 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6" Nov 28 11:08:52 crc kubenswrapper[4772]: I1128 11:08:52.994039 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:08:52 crc kubenswrapper[4772]: I1128 11:08:52.994069 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:08:52 crc kubenswrapper[4772]: I1128 11:08:52.994113 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:08:52 crc kubenswrapper[4772]: I1128 11:08:52.999066 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.003234 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.003609 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.003783 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.004022 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.003273 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.318819 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.353230 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xntkm"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.353616 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.361722 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.362107 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.362241 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.362391 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.365375 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4ct44"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.365846 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.366022 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.366376 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.367648 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4dx8r"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.368062 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.368329 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.368596 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.372078 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.373203 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.373703 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l8xps"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.374074 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.374285 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.374376 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.374660 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.374679 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-pjlzq"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.375030 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pjlzq" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.375047 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.375379 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.375441 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.375579 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.375661 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.375770 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.375597 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.375882 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.375677 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.376937 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.377122 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.377564 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.377963 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.378253 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.378796 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.379257 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.380849 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.381191 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.381415 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.381632 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.381773 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.386263 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pv78h"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.386884 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.387168 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.392593 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.393137 4772 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.393430 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.393513 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.393608 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.393143 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.393666 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.393741 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.393822 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.393845 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.393674 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.393971 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.393971 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.394237 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.395230 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.395525 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.395730 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.395989 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.396071 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.396118 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.396226 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.396329 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.396454 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.396535 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xq8v7"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.396959 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.397132 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.397177 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.397195 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-p6f5q"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.397419 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.397380 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.397423 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.397433 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.407853 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.409461 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.409876 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8l8k"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.412140 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.435912 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8l8k" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.437171 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.437623 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.438518 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.438652 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.438742 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.438831 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.439088 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pgb48"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.439511 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.439573 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.440217 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.440450 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.441036 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.441150 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.441242 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.441327 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.441411 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.441465 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 
11:08:53.443613 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.443646 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.443674 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.443735 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.443904 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.444078 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.444167 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.444458 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.444728 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.445540 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.445576 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.445754 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.445813 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.445874 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.445933 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.446081 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.446129 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.446234 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 28 11:08:53 crc 
kubenswrapper[4772]: I1128 11:08:53.446390 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.446438 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.446547 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.446569 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.446236 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.446678 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.446776 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.446912 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.446091 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.447115 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.450200 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.450412 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.450952 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.451994 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2dg24"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.452796 4772 util.go:30] "No sandbox for pod can be found. 
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.456976 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.458484 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.460506 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.460951 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.461045 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.461070 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.461115 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.461169 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.461185 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.461288 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.461290 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.461402 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.461408 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.461345 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.461465 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.461549 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.461689 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.462213 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.462349 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.462656 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.462774 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.464889 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.471967 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-wwxmx"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.472349 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-xffg6"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.481784 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bftgd"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.481888 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-wwxmx"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.482028 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xffg6"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.482115 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.484400 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bftgd"
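A "SyncLoop ADD" entry (kubelet.go:2421) records the kubelet learning of a pod newly bound to this node; the paired util.go:30 entry is logged on the pod's first sync, when no existing sandbox is found and the kubelet will ask the CRI runtime to create one. A companion Go sketch, under the same assumed kubelet.log filename, that lists the pods being cold-started and whether their ADD was seen first:

package main

// Pairs "SyncLoop ADD" events with the "No sandbox ..." first-sync message.
// kubelet.log is an assumed filename for this journal excerpt.

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	add := regexp.MustCompile(`"SyncLoop ADD" source="api" pods=\["([^"]+)"\]`)
	noSandbox := regexp.MustCompile(`"No sandbox for pod can be found\. Need to start a new one" pod="([^"]+)"`)
	f, err := os.Open("kubelet.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	added := map[string]bool{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if m := add.FindStringSubmatch(line); m != nil {
			added[m[1]] = true
		}
		if m := noSandbox.FindStringSubmatch(line); m != nil {
			fmt.Printf("cold start: %s (ADD seen: %v)\n", m[1], added[m[1]])
		}
	}
}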
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.485289 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-encryption-config\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.485322 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t56jk\" (UniqueName: \"kubernetes.io/projected/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-kube-api-access-t56jk\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.485413 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e42674-84df-48c0-a77f-35afeb169848-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztvsg\" (UID: \"f6e42674-84df-48c0-a77f-35afeb169848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.485438 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d593cf3a-ced7-4f3a-a15a-10c3309a2ee3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4ct44\" (UID: \"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.485462 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e42674-84df-48c0-a77f-35afeb169848-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztvsg\" (UID: \"f6e42674-84df-48c0-a77f-35afeb169848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.485706 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vsrmr"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.485757 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-audit\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.492963 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-client-ca\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499120 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8366b951-5122-40d9-b665-a3629d747906-config\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499316 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8366b951-5122-40d9-b665-a3629d747906-service-ca-bundle\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499385 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3d3606e-c8a1-4c96-b66c-14ec79268ef1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-c5t26\" (UID: \"b3d3606e-c8a1-4c96-b66c-14ec79268ef1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499400 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499418 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88914887-f24a-4852-9a3e-603b1db2b5b5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b5gcb\" (UID: \"88914887-f24a-4852-9a3e-603b1db2b5b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499442 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88914887-f24a-4852-9a3e-603b1db2b5b5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b5gcb\" (UID: \"88914887-f24a-4852-9a3e-603b1db2b5b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499471 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a94fd25b-ca6a-4afe-922a-61ebfba248ed-serving-cert\") pod \"route-controller-manager-6576b87f9c-mfkft\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499494 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/98dacb5b-7077-4230-a992-5370a0f0f44e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pv78h\" (UID: \"98dacb5b-7077-4230-a992-5370a0f0f44e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499535 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-audit-dir\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499560 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a94fd25b-ca6a-4afe-922a-61ebfba248ed-client-ca\") pod \"route-controller-manager-6576b87f9c-mfkft\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499620 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtrz2\" (UniqueName: \"kubernetes.io/projected/98dacb5b-7077-4230-a992-5370a0f0f44e-kube-api-access-vtrz2\") pod \"openshift-config-operator-7777fb866f-pv78h\" (UID: \"98dacb5b-7077-4230-a992-5370a0f0f44e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499647 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/93e2a709-a456-4a26-a483-3f1ece08f4fe-audit-dir\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499668 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltddk\" (UniqueName: \"kubernetes.io/projected/e836396e-bc34-4178-aa3e-94ce5799b2fa-kube-api-access-ltddk\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499703 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/29522aa2-ad8b-4fbe-b872-3abe137e7676-srv-cert\") pod \"catalog-operator-68c6474976-wt2vm\" (UID: \"29522aa2-ad8b-4fbe-b872-3abe137e7676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499727 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqd2f\" (UniqueName: \"kubernetes.io/projected/a94fd25b-ca6a-4afe-922a-61ebfba248ed-kube-api-access-xqd2f\") pod \"route-controller-manager-6576b87f9c-mfkft\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499819 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-etcd-serving-ca\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499845 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f10a259-c46a-4325-8b94-133ebbc6041a-service-ca-bundle\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499879 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98dacb5b-7077-4230-a992-5370a0f0f44e-serving-cert\") pod \"openshift-config-operator-7777fb866f-pv78h\" (UID: \"98dacb5b-7077-4230-a992-5370a0f0f44e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499900 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-config\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499920 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9f10a259-c46a-4325-8b94-133ebbc6041a-stats-auth\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499943 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d593cf3a-ced7-4f3a-a15a-10c3309a2ee3-config\") pod \"machine-api-operator-5694c8668f-4ct44\" (UID: \"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499965 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8366b951-5122-40d9-b665-a3629d747906-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499985 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srqh6\" (UniqueName: \"kubernetes.io/projected/b3d3606e-c8a1-4c96-b66c-14ec79268ef1-kube-api-access-srqh6\") pod \"kube-storage-version-migrator-operator-b67b599dd-c5t26\" (UID: \"b3d3606e-c8a1-4c96-b66c-14ec79268ef1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.499996 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500003 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56hlk\" (UniqueName: \"kubernetes.io/projected/8366b951-5122-40d9-b665-a3629d747906-kube-api-access-56hlk\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500150 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a94fd25b-ca6a-4afe-922a-61ebfba248ed-config\") pod \"route-controller-manager-6576b87f9c-mfkft\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500187 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88914887-f24a-4852-9a3e-603b1db2b5b5-config\") pod \"kube-controller-manager-operator-78b949d7b-b5gcb\" (UID: \"88914887-f24a-4852-9a3e-603b1db2b5b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500209 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfxbm\" (UniqueName: \"kubernetes.io/projected/93e2a709-a456-4a26-a483-3f1ece08f4fe-kube-api-access-nfxbm\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500224 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500239 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/29522aa2-ad8b-4fbe-b872-3abe137e7676-profile-collector-cert\") pod \"catalog-operator-68c6474976-wt2vm\" (UID: \"29522aa2-ad8b-4fbe-b872-3abe137e7676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500270 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f10a259-c46a-4325-8b94-133ebbc6041a-metrics-certs\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500285 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-serving-cert\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500307 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9hw\" (UniqueName: \"kubernetes.io/projected/9f10a259-c46a-4325-8b94-133ebbc6041a-kube-api-access-4f9hw\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500324 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8366b951-5122-40d9-b665-a3629d747906-serving-cert\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500345 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbwph\" (UniqueName: \"kubernetes.io/projected/d827072f-b633-42a9-b6c0-1f515508f488-kube-api-access-xbwph\") pod \"cluster-samples-operator-665b6dd947-cltd7\" (UID: \"d827072f-b633-42a9-b6c0-1f515508f488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500386 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d593cf3a-ced7-4f3a-a15a-10c3309a2ee3-images\") pod \"machine-api-operator-5694c8668f-4ct44\" (UID: \"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500419 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-node-pullsecrets\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500443 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500478 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/93e2a709-a456-4a26-a483-3f1ece08f4fe-etcd-client\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500502 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93e2a709-a456-4a26-a483-3f1ece08f4fe-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500522 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9f10a259-c46a-4325-8b94-133ebbc6041a-default-certificate\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500597 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-config\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500640 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3d3606e-c8a1-4c96-b66c-14ec79268ef1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-c5t26\" (UID: \"b3d3606e-c8a1-4c96-b66c-14ec79268ef1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500672 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/93e2a709-a456-4a26-a483-3f1ece08f4fe-audit-policies\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500690 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l98m\" (UniqueName: \"kubernetes.io/projected/f6e42674-84df-48c0-a77f-35afeb169848-kube-api-access-5l98m\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztvsg\" (UID: \"f6e42674-84df-48c0-a77f-35afeb169848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500708 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xfr7\" (UniqueName: \"kubernetes.io/projected/29522aa2-ad8b-4fbe-b872-3abe137e7676-kube-api-access-5xfr7\") pod \"catalog-operator-68c6474976-wt2vm\" (UID: \"29522aa2-ad8b-4fbe-b872-3abe137e7676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500723 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e2a709-a456-4a26-a483-3f1ece08f4fe-serving-cert\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500757 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-image-import-ca\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500810 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e836396e-bc34-4178-aa3e-94ce5799b2fa-serving-cert\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500842 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5zx4\" (UniqueName: \"kubernetes.io/projected/d593cf3a-ced7-4f3a-a15a-10c3309a2ee3-kube-api-access-k5zx4\") pod \"machine-api-operator-5694c8668f-4ct44\" (UID: \"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500891 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/93e2a709-a456-4a26-a483-3f1ece08f4fe-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500913 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-etcd-client\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500933 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d827072f-b633-42a9-b6c0-1f515508f488-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cltd7\" (UID: \"d827072f-b633-42a9-b6c0-1f515508f488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.500953 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/93e2a709-a456-4a26-a483-3f1ece08f4fe-encryption-config\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.503352 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.503759 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n9b9t"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.504238 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.504534 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.504620 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc"
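The reconciler_common.go:245 entries are phase one of the kubelet volume manager's reconciler: operationExecutor.VerifyControllerAttachedVolume records each desired volume as attached in the actual state of world (for non-attachable types such as secret, configmap, projected, empty-dir, and host-path this is essentially a bookkeeping step) so that mounting can follow. A Go sketch, under the same assumed kubelet.log filename, that inventories the volumes each pod is waiting on; the pattern uses \\" because the journal stores the message's inner quotes escaped:

package main

// Groups VerifyControllerAttachedVolume events by pod.
// kubelet.log is an assumed filename for this journal excerpt.

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
	"strings"
)

func main() {
	re := regexp.MustCompile(`operationExecutor\.VerifyControllerAttachedVolume started for volume \\"([^\\]+)\\".* pod="([^"]+)"`)
	vols := map[string][]string{}
	f, err := os.Open("kubelet.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			vols[m[2]] = append(vols[m[2]], m[1]) // pod -> volume names
		}
	}
	pods := make([]string, 0, len(vols))
	for p := range vols {
		pods = append(pods, p)
	}
	sort.Strings(pods)
	for _, p := range pods {
		fmt.Printf("%s: %s\n", p, strings.Join(vols[p], ", "))
	}
}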
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.504824 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n9b9t"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.505561 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tfv7p"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.506395 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tfv7p"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.508347 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.509043 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.510519 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.511024 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.512614 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.513440 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xntkm"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.513574 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.514485 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l8xps"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.515847 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.516351 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.517405 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.519731 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rnhdf"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.520681 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4dx8r"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.520721 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.520828 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.522333 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.523937 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-prq4h"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.524783 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xq8v7"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.524881 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-prq4h"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.525970 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.527518 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.528195 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.529336 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-p6f5q"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.531099 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4ct44"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.535601 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.535656 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pv78h"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.535668 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.538401 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2dg24"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.538446 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.541540 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.541576 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.541589 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.541791 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.543214 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.543985 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bftgd"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.548491 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xffg6"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.548537 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.548552 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wwxmx"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.550258 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rnhdf"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.552461 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tfv7p"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.553143 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.555877 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.569615 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8l8k"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.572496 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pgb48"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.573611 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.575034 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vsrmr"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.575651 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n9b9t"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.576675 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-bqx72"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.576852 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.579142 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-prq4h"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.579445 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bqx72"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.579540 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tbqqf"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.580155 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tbqqf"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.584672 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.595291 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tbqqf"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.595550 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sps4h"]
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.596878 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.598661 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sps4h"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.605948 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfxbm\" (UniqueName: \"kubernetes.io/projected/93e2a709-a456-4a26-a483-3f1ece08f4fe-kube-api-access-nfxbm\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.605999 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606022 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/29522aa2-ad8b-4fbe-b872-3abe137e7676-profile-collector-cert\") pod \"catalog-operator-68c6474976-wt2vm\" (UID: \"29522aa2-ad8b-4fbe-b872-3abe137e7676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606056 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f10a259-c46a-4325-8b94-133ebbc6041a-metrics-certs\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606072 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-serving-cert\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606089 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f9hw\" (UniqueName: \"kubernetes.io/projected/9f10a259-c46a-4325-8b94-133ebbc6041a-kube-api-access-4f9hw\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606107 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8366b951-5122-40d9-b665-a3629d747906-serving-cert\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606127 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbwph\" (UniqueName: \"kubernetes.io/projected/d827072f-b633-42a9-b6c0-1f515508f488-kube-api-access-xbwph\") pod \"cluster-samples-operator-665b6dd947-cltd7\" (UID: \"d827072f-b633-42a9-b6c0-1f515508f488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606152 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a5abbb76-72b7-477e-8c59-d74fc3333188-etcd-ca\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606174 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5abbb76-72b7-477e-8c59-d74fc3333188-serving-cert\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606194 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d593cf3a-ced7-4f3a-a15a-10c3309a2ee3-images\") pod \"machine-api-operator-5694c8668f-4ct44\" (UID: \"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606213 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-node-pullsecrets\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606232 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606256 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/93e2a709-a456-4a26-a483-3f1ece08f4fe-etcd-client\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606271 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93e2a709-a456-4a26-a483-3f1ece08f4fe-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606289 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9f10a259-c46a-4325-8b94-133ebbc6041a-default-certificate\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606308 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a5abbb76-72b7-477e-8c59-d74fc3333188-etcd-service-ca\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606328 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-config\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606346 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3d3606e-c8a1-4c96-b66c-14ec79268ef1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-c5t26\" (UID: \"b3d3606e-c8a1-4c96-b66c-14ec79268ef1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606385 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/93e2a709-a456-4a26-a483-3f1ece08f4fe-audit-policies\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606408 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l98m\" (UniqueName: \"kubernetes.io/projected/f6e42674-84df-48c0-a77f-35afeb169848-kube-api-access-5l98m\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztvsg\" (UID: \"f6e42674-84df-48c0-a77f-35afeb169848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606428 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xfr7\" (UniqueName: \"kubernetes.io/projected/29522aa2-ad8b-4fbe-b872-3abe137e7676-kube-api-access-5xfr7\") pod \"catalog-operator-68c6474976-wt2vm\" (UID: \"29522aa2-ad8b-4fbe-b872-3abe137e7676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606459 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e2a709-a456-4a26-a483-3f1ece08f4fe-serving-cert\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606479 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e836396e-bc34-4178-aa3e-94ce5799b2fa-serving-cert\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606501 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-image-import-ca\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606541 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5zx4\" (UniqueName: \"kubernetes.io/projected/d593cf3a-ced7-4f3a-a15a-10c3309a2ee3-kube-api-access-k5zx4\") pod \"machine-api-operator-5694c8668f-4ct44\" (UID: \"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606561 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/93e2a709-a456-4a26-a483-3f1ece08f4fe-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606578 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-etcd-client\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606596 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d827072f-b633-42a9-b6c0-1f515508f488-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cltd7\" (UID: \"d827072f-b633-42a9-b6c0-1f515508f488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606617 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/93e2a709-a456-4a26-a483-3f1ece08f4fe-encryption-config\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606651 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a5abbb76-72b7-477e-8c59-d74fc3333188-etcd-client\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606675 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-encryption-config\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606697 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t56jk\" (UniqueName: \"kubernetes.io/projected/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-kube-api-access-t56jk\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606716 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e42674-84df-48c0-a77f-35afeb169848-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztvsg\" (UID: \"f6e42674-84df-48c0-a77f-35afeb169848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606734 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e42674-84df-48c0-a77f-35afeb169848-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztvsg\" (UID: \"f6e42674-84df-48c0-a77f-35afeb169848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606752 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-audit\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606771 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d593cf3a-ced7-4f3a-a15a-10c3309a2ee3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4ct44\" (UID: \"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606795 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5abbb76-72b7-477e-8c59-d74fc3333188-config\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606816 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-client-ca\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606834 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8366b951-5122-40d9-b665-a3629d747906-config\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606853 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8366b951-5122-40d9-b665-a3629d747906-service-ca-bundle\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606879 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3d3606e-c8a1-4c96-b66c-14ec79268ef1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-c5t26\" (UID: \"b3d3606e-c8a1-4c96-b66c-14ec79268ef1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606898 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88914887-f24a-4852-9a3e-603b1db2b5b5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b5gcb\" (UID: \"88914887-f24a-4852-9a3e-603b1db2b5b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606918 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88914887-f24a-4852-9a3e-603b1db2b5b5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b5gcb\" (UID: \"88914887-f24a-4852-9a3e-603b1db2b5b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606938 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a94fd25b-ca6a-4afe-922a-61ebfba248ed-serving-cert\") pod \"route-controller-manager-6576b87f9c-mfkft\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606960 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/98dacb5b-7077-4230-a992-5370a0f0f44e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pv78h\" (UID: \"98dacb5b-7077-4230-a992-5370a0f0f44e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606982 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-audit-dir\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r"
Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.606999 4772 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a94fd25b-ca6a-4afe-922a-61ebfba248ed-client-ca\") pod \"route-controller-manager-6576b87f9c-mfkft\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607017 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtrz2\" (UniqueName: \"kubernetes.io/projected/98dacb5b-7077-4230-a992-5370a0f0f44e-kube-api-access-vtrz2\") pod \"openshift-config-operator-7777fb866f-pv78h\" (UID: \"98dacb5b-7077-4230-a992-5370a0f0f44e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607039 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgfb7\" (UniqueName: \"kubernetes.io/projected/a5abbb76-72b7-477e-8c59-d74fc3333188-kube-api-access-dgfb7\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607055 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/93e2a709-a456-4a26-a483-3f1ece08f4fe-audit-dir\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607073 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltddk\" (UniqueName: \"kubernetes.io/projected/e836396e-bc34-4178-aa3e-94ce5799b2fa-kube-api-access-ltddk\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607094 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqd2f\" (UniqueName: \"kubernetes.io/projected/a94fd25b-ca6a-4afe-922a-61ebfba248ed-kube-api-access-xqd2f\") pod \"route-controller-manager-6576b87f9c-mfkft\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607112 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-etcd-serving-ca\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607128 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f10a259-c46a-4325-8b94-133ebbc6041a-service-ca-bundle\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607145 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/29522aa2-ad8b-4fbe-b872-3abe137e7676-srv-cert\") pod 
\"catalog-operator-68c6474976-wt2vm\" (UID: \"29522aa2-ad8b-4fbe-b872-3abe137e7676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607168 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98dacb5b-7077-4230-a992-5370a0f0f44e-serving-cert\") pod \"openshift-config-operator-7777fb866f-pv78h\" (UID: \"98dacb5b-7077-4230-a992-5370a0f0f44e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607187 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-config\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607206 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9f10a259-c46a-4325-8b94-133ebbc6041a-stats-auth\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607225 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d593cf3a-ced7-4f3a-a15a-10c3309a2ee3-config\") pod \"machine-api-operator-5694c8668f-4ct44\" (UID: \"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607246 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8366b951-5122-40d9-b665-a3629d747906-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607268 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srqh6\" (UniqueName: \"kubernetes.io/projected/b3d3606e-c8a1-4c96-b66c-14ec79268ef1-kube-api-access-srqh6\") pod \"kube-storage-version-migrator-operator-b67b599dd-c5t26\" (UID: \"b3d3606e-c8a1-4c96-b66c-14ec79268ef1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607287 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a94fd25b-ca6a-4afe-922a-61ebfba248ed-config\") pod \"route-controller-manager-6576b87f9c-mfkft\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607314 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88914887-f24a-4852-9a3e-603b1db2b5b5-config\") pod \"kube-controller-manager-operator-78b949d7b-b5gcb\" (UID: \"88914887-f24a-4852-9a3e-603b1db2b5b5\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607341 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56hlk\" (UniqueName: \"kubernetes.io/projected/8366b951-5122-40d9-b665-a3629d747906-kube-api-access-56hlk\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.607869 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sps4h"] Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.609819 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e42674-84df-48c0-a77f-35afeb169848-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztvsg\" (UID: \"f6e42674-84df-48c0-a77f-35afeb169848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.611030 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-audit\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.611707 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-node-pullsecrets\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.612781 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d593cf3a-ced7-4f3a-a15a-10c3309a2ee3-images\") pod \"machine-api-operator-5694c8668f-4ct44\" (UID: \"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.612990 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.613892 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e42674-84df-48c0-a77f-35afeb169848-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztvsg\" (UID: \"f6e42674-84df-48c0-a77f-35afeb169848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.613980 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-audit-dir\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 
11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.614037 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9f10a259-c46a-4325-8b94-133ebbc6041a-default-certificate\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.614914 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a94fd25b-ca6a-4afe-922a-61ebfba248ed-client-ca\") pod \"route-controller-manager-6576b87f9c-mfkft\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.615323 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.615416 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-image-import-ca\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.615913 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-etcd-client\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.615982 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/93e2a709-a456-4a26-a483-3f1ece08f4fe-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.616093 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/93e2a709-a456-4a26-a483-3f1ece08f4fe-audit-dir\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.616607 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-client-ca\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.616809 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d593cf3a-ced7-4f3a-a15a-10c3309a2ee3-config\") pod \"machine-api-operator-5694c8668f-4ct44\" (UID: \"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.616866 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88914887-f24a-4852-9a3e-603b1db2b5b5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b5gcb\" (UID: \"88914887-f24a-4852-9a3e-603b1db2b5b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.617240 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/98dacb5b-7077-4230-a992-5370a0f0f44e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pv78h\" (UID: \"98dacb5b-7077-4230-a992-5370a0f0f44e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.617400 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-etcd-serving-ca\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.617596 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.617818 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8366b951-5122-40d9-b665-a3629d747906-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.617877 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-config\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.617946 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8366b951-5122-40d9-b665-a3629d747906-service-ca-bundle\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.618056 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a94fd25b-ca6a-4afe-922a-61ebfba248ed-config\") pod \"route-controller-manager-6576b87f9c-mfkft\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.618153 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93e2a709-a456-4a26-a483-3f1ece08f4fe-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: 
\"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.618509 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d827072f-b633-42a9-b6c0-1f515508f488-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cltd7\" (UID: \"d827072f-b633-42a9-b6c0-1f515508f488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.618545 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8366b951-5122-40d9-b665-a3629d747906-config\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.618569 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/93e2a709-a456-4a26-a483-3f1ece08f4fe-audit-policies\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.619681 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-config\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.619813 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3d3606e-c8a1-4c96-b66c-14ec79268ef1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-c5t26\" (UID: \"b3d3606e-c8a1-4c96-b66c-14ec79268ef1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.619892 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/29522aa2-ad8b-4fbe-b872-3abe137e7676-profile-collector-cert\") pod \"catalog-operator-68c6474976-wt2vm\" (UID: \"29522aa2-ad8b-4fbe-b872-3abe137e7676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.620107 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88914887-f24a-4852-9a3e-603b1db2b5b5-config\") pod \"kube-controller-manager-operator-78b949d7b-b5gcb\" (UID: \"88914887-f24a-4852-9a3e-603b1db2b5b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.620668 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3d3606e-c8a1-4c96-b66c-14ec79268ef1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-c5t26\" (UID: \"b3d3606e-c8a1-4c96-b66c-14ec79268ef1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26" Nov 28 11:08:53 crc kubenswrapper[4772]: 
I1128 11:08:53.622150 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e2a709-a456-4a26-a483-3f1ece08f4fe-serving-cert\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.624140 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/29522aa2-ad8b-4fbe-b872-3abe137e7676-srv-cert\") pod \"catalog-operator-68c6474976-wt2vm\" (UID: \"29522aa2-ad8b-4fbe-b872-3abe137e7676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.624977 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f10a259-c46a-4325-8b94-133ebbc6041a-metrics-certs\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.625144 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/93e2a709-a456-4a26-a483-3f1ece08f4fe-etcd-client\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.625266 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a94fd25b-ca6a-4afe-922a-61ebfba248ed-serving-cert\") pod \"route-controller-manager-6576b87f9c-mfkft\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.625292 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8366b951-5122-40d9-b665-a3629d747906-serving-cert\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.625415 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98dacb5b-7077-4230-a992-5370a0f0f44e-serving-cert\") pod \"openshift-config-operator-7777fb866f-pv78h\" (UID: \"98dacb5b-7077-4230-a992-5370a0f0f44e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.625821 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9f10a259-c46a-4325-8b94-133ebbc6041a-stats-auth\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.626186 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e836396e-bc34-4178-aa3e-94ce5799b2fa-serving-cert\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.626263 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-encryption-config\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.626653 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/93e2a709-a456-4a26-a483-3f1ece08f4fe-encryption-config\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.628187 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-serving-cert\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.630026 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f10a259-c46a-4325-8b94-133ebbc6041a-service-ca-bundle\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.633520 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d593cf3a-ced7-4f3a-a15a-10c3309a2ee3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4ct44\" (UID: \"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.636054 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.676826 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.697010 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.708770 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a5abbb76-72b7-477e-8c59-d74fc3333188-etcd-ca\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.708811 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5abbb76-72b7-477e-8c59-d74fc3333188-serving-cert\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.708846 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a5abbb76-72b7-477e-8c59-d74fc3333188-etcd-service-ca\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.708932 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a5abbb76-72b7-477e-8c59-d74fc3333188-etcd-client\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.708969 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5abbb76-72b7-477e-8c59-d74fc3333188-config\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.709011 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgfb7\" (UniqueName: \"kubernetes.io/projected/a5abbb76-72b7-477e-8c59-d74fc3333188-kube-api-access-dgfb7\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.709672 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a5abbb76-72b7-477e-8c59-d74fc3333188-etcd-ca\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.710348 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5abbb76-72b7-477e-8c59-d74fc3333188-config\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.710414 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a5abbb76-72b7-477e-8c59-d74fc3333188-etcd-service-ca\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.712131 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5abbb76-72b7-477e-8c59-d74fc3333188-serving-cert\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.712732 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a5abbb76-72b7-477e-8c59-d74fc3333188-etcd-client\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.716412 4772 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.736406 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.756320 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.776318 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.796659 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.816758 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.836106 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.857385 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.875906 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.897208 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.916118 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.936607 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.956240 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.975971 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 28 11:08:53 crc kubenswrapper[4772]: I1128 11:08:53.995354 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.016300 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.036143 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.055291 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.075426 4772 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.095851 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.116293 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.136759 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.156779 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.181537 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.195639 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.215708 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.236099 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.255624 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.276782 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.316969 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.345485 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.370984 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.376080 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.396119 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.415592 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.437497 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.455898 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.477136 4772 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.496557 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.514006 4772 request.go:700] Waited for 1.013601139s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&limit=500&resourceVersion=0 Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.515844 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.537143 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.569788 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.576657 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.596273 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.616604 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.636029 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.656233 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.676772 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.697620 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.716640 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.736766 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.758727 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.777287 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.798109 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.817019 4772 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"kube-root-ca.crt" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.835722 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.856583 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.876644 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.896738 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.916207 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.937950 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.957635 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.977129 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 28 11:08:54 crc kubenswrapper[4772]: I1128 11:08:54.995688 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.017270 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.037028 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.055382 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.076756 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.096273 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.126414 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.136372 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.155902 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.176875 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.196731 4772 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.215812 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.235659 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.255872 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.275473 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.295990 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.315851 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.337092 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.356133 4772 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.375983 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.396392 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.442143 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56hlk\" (UniqueName: \"kubernetes.io/projected/8366b951-5122-40d9-b665-a3629d747906-kube-api-access-56hlk\") pod \"authentication-operator-69f744f599-xntkm\" (UID: \"8366b951-5122-40d9-b665-a3629d747906\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.456192 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t56jk\" (UniqueName: \"kubernetes.io/projected/7a4baf8b-1e57-43cb-972a-b3ad73d3a192-kube-api-access-t56jk\") pod \"apiserver-76f77b778f-4dx8r\" (UID: \"7a4baf8b-1e57-43cb-972a-b3ad73d3a192\") " pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.483199 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtrz2\" (UniqueName: \"kubernetes.io/projected/98dacb5b-7077-4230-a992-5370a0f0f44e-kube-api-access-vtrz2\") pod \"openshift-config-operator-7777fb866f-pv78h\" (UID: \"98dacb5b-7077-4230-a992-5370a0f0f44e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.496799 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbwph\" (UniqueName: \"kubernetes.io/projected/d827072f-b633-42a9-b6c0-1f515508f488-kube-api-access-xbwph\") pod \"cluster-samples-operator-665b6dd947-cltd7\" (UID: 
\"d827072f-b633-42a9-b6c0-1f515508f488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.514651 4772 request.go:700] Waited for 1.899121967s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.516458 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfxbm\" (UniqueName: \"kubernetes.io/projected/93e2a709-a456-4a26-a483-3f1ece08f4fe-kube-api-access-nfxbm\") pod \"apiserver-7bbb656c7d-2lkxn\" (UID: \"93e2a709-a456-4a26-a483-3f1ece08f4fe\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.532462 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5zx4\" (UniqueName: \"kubernetes.io/projected/d593cf3a-ced7-4f3a-a15a-10c3309a2ee3-kube-api-access-k5zx4\") pod \"machine-api-operator-5694c8668f-4ct44\" (UID: \"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.547027 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.556664 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqd2f\" (UniqueName: \"kubernetes.io/projected/a94fd25b-ca6a-4afe-922a-61ebfba248ed-kube-api-access-xqd2f\") pod \"route-controller-manager-6576b87f9c-mfkft\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.564646 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.581669 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88914887-f24a-4852-9a3e-603b1db2b5b5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b5gcb\" (UID: \"88914887-f24a-4852-9a3e-603b1db2b5b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.585591 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.600755 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltddk\" (UniqueName: \"kubernetes.io/projected/e836396e-bc34-4178-aa3e-94ce5799b2fa-kube-api-access-ltddk\") pod \"controller-manager-879f6c89f-l8xps\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.611500 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.614047 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srqh6\" (UniqueName: \"kubernetes.io/projected/b3d3606e-c8a1-4c96-b66c-14ec79268ef1-kube-api-access-srqh6\") pod \"kube-storage-version-migrator-operator-b67b599dd-c5t26\" (UID: \"b3d3606e-c8a1-4c96-b66c-14ec79268ef1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.620641 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.635519 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xfr7\" (UniqueName: \"kubernetes.io/projected/29522aa2-ad8b-4fbe-b872-3abe137e7676-kube-api-access-5xfr7\") pod \"catalog-operator-68c6474976-wt2vm\" (UID: \"29522aa2-ad8b-4fbe-b872-3abe137e7676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.651974 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f9hw\" (UniqueName: \"kubernetes.io/projected/9f10a259-c46a-4325-8b94-133ebbc6041a-kube-api-access-4f9hw\") pod \"router-default-5444994796-pjlzq\" (UID: \"9f10a259-c46a-4325-8b94-133ebbc6041a\") " pod="openshift-ingress/router-default-5444994796-pjlzq" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.653374 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pjlzq" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.654727 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.666271 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.674973 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l98m\" (UniqueName: \"kubernetes.io/projected/f6e42674-84df-48c0-a77f-35afeb169848-kube-api-access-5l98m\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztvsg\" (UID: \"f6e42674-84df-48c0-a77f-35afeb169848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.705216 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.722789 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.732086 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgfb7\" (UniqueName: \"kubernetes.io/projected/a5abbb76-72b7-477e-8c59-d74fc3333188-kube-api-access-dgfb7\") pod \"etcd-operator-b45778765-p6f5q\" (UID: \"a5abbb76-72b7-477e-8c59-d74fc3333188\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.732654 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-bound-sa-token\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.732685 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/facbc367-fb0c-466a-a01e-8f1118db6fcb-config\") pod \"kube-apiserver-operator-766d6c64bb-n6lq9\" (UID: \"facbc367-fb0c-466a-a01e-8f1118db6fcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.732708 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/01678d74-64f0-4bee-b900-6dd92b577842-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.732747 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-oauth-serving-cert\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.732801 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghl8w\" (UniqueName: \"kubernetes.io/projected/048a22d4-da9f-4fc6-837a-c5398965a0f0-kube-api-access-ghl8w\") pod \"migrator-59844c95c7-2dg24\" (UID: \"048a22d4-da9f-4fc6-837a-c5398965a0f0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dg24" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.732856 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.732894 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbtcd\" (UniqueName: \"kubernetes.io/projected/a44f7c6d-d509-4a18-9fd9-47419d54af4b-kube-api-access-xbtcd\") pod \"cluster-image-registry-operator-dc59b4c8b-6stnj\" (UID: \"a44f7c6d-d509-4a18-9fd9-47419d54af4b\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.732919 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8hrc\" (UniqueName: \"kubernetes.io/projected/f18077e0-f3d3-4327-a384-321e3d9b9f78-kube-api-access-x8hrc\") pod \"machine-config-operator-74547568cd-7bspm\" (UID: \"f18077e0-f3d3-4327-a384-321e3d9b9f78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" Nov 28 11:08:55 crc kubenswrapper[4772]: E1128 11:08:55.733279 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:56.23326478 +0000 UTC m=+134.556508087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.733721 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psmlt\" (UniqueName: \"kubernetes.io/projected/ee7591a8-5999-4c4f-a0c8-de4fc90ac59d-kube-api-access-psmlt\") pod \"machine-approver-56656f9798-g6nz6\" (UID: \"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.733827 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cngz\" (UniqueName: \"kubernetes.io/projected/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-kube-api-access-6cngz\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.733857 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a44f7c6d-d509-4a18-9fd9-47419d54af4b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6stnj\" (UID: \"a44f7c6d-d509-4a18-9fd9-47419d54af4b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.733879 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6310d4f4-cea5-4b44-a388-193d13bc5ec7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dttbx\" (UID: \"6310d4f4-cea5-4b44-a388-193d13bc5ec7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.733896 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l92fb\" (UniqueName: \"kubernetes.io/projected/b36f0f60-82ce-49f5-b23b-2f585244e8db-kube-api-access-l92fb\") pod \"ingress-operator-5b745b69d9-k8l6n\" (UID: \"b36f0f60-82ce-49f5-b23b-2f585244e8db\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.733912 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bftgd\" (UID: \"a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bftgd" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734087 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-audit-policies\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734109 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734146 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b29fa00-3205-4f7c-8f5f-671c7921029b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w8l8k\" (UID: \"8b29fa00-3205-4f7c-8f5f-671c7921029b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8l8k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734164 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pj6t\" (UniqueName: \"kubernetes.io/projected/d3999e53-dcc7-4e16-8463-de30fb7fcbf6-kube-api-access-2pj6t\") pod \"machine-config-controller-84d6567774-r2cqm\" (UID: \"d3999e53-dcc7-4e16-8463-de30fb7fcbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734178 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6310d4f4-cea5-4b44-a388-193d13bc5ec7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dttbx\" (UID: \"6310d4f4-cea5-4b44-a388-193d13bc5ec7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734242 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/01678d74-64f0-4bee-b900-6dd92b577842-registry-certificates\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734259 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734273 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734289 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/facbc367-fb0c-466a-a01e-8f1118db6fcb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n6lq9\" (UID: \"facbc367-fb0c-466a-a01e-8f1118db6fcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734439 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-service-ca\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734456 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6310d4f4-cea5-4b44-a388-193d13bc5ec7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dttbx\" (UID: \"6310d4f4-cea5-4b44-a388-193d13bc5ec7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734472 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e4ff33b-3159-4449-87fa-36031498cfbc-config\") pod \"console-operator-58897d9998-xq8v7\" (UID: \"8e4ff33b-3159-4449-87fa-36031498cfbc\") " pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734491 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5tbn\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-kube-api-access-p5tbn\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734505 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734520 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7f9f\" 
(UniqueName: \"kubernetes.io/projected/9078e378-b549-4e5e-82ee-b9a96cf8e4da-kube-api-access-d7f9f\") pod \"downloads-7954f5f757-wwxmx\" (UID: \"9078e378-b549-4e5e-82ee-b9a96cf8e4da\") " pod="openshift-console/downloads-7954f5f757-wwxmx" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734567 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-config\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734707 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee7591a8-5999-4c4f-a0c8-de4fc90ac59d-auth-proxy-config\") pod \"machine-approver-56656f9798-g6nz6\" (UID: \"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734724 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a44f7c6d-d509-4a18-9fd9-47419d54af4b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6stnj\" (UID: \"a44f7c6d-d509-4a18-9fd9-47419d54af4b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734740 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fdd9\" (UniqueName: \"kubernetes.io/projected/f1ac9e8a-e68c-4d22-8841-3f36863b0574-kube-api-access-5fdd9\") pod \"package-server-manager-789f6589d5-mlfhh\" (UID: \"f1ac9e8a-e68c-4d22-8841-3f36863b0574\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734754 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ee7591a8-5999-4c4f-a0c8-de4fc90ac59d-machine-approver-tls\") pod \"machine-approver-56656f9798-g6nz6\" (UID: \"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734769 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734794 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-oauth-config\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734811 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/01678d74-64f0-4bee-b900-6dd92b577842-trusted-ca\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734826 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.734859 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b36f0f60-82ce-49f5-b23b-2f585244e8db-bound-sa-token\") pod \"ingress-operator-5b745b69d9-k8l6n\" (UID: \"b36f0f60-82ce-49f5-b23b-2f585244e8db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.735980 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3999e53-dcc7-4e16-8463-de30fb7fcbf6-proxy-tls\") pod \"machine-config-controller-84d6567774-r2cqm\" (UID: \"d3999e53-dcc7-4e16-8463-de30fb7fcbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.736001 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmms7\" (UniqueName: \"kubernetes.io/projected/99dd4351-6bbc-4e6e-b08f-88de7315987b-kube-api-access-fmms7\") pod \"olm-operator-6b444d44fb-bm78k\" (UID: \"99dd4351-6bbc-4e6e-b08f-88de7315987b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.736028 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.736102 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-audit-dir\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.736127 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zjdx\" (UniqueName: \"kubernetes.io/projected/8b29fa00-3205-4f7c-8f5f-671c7921029b-kube-api-access-4zjdx\") pod \"control-plane-machine-set-operator-78cbb6b69f-w8l8k\" (UID: \"8b29fa00-3205-4f7c-8f5f-671c7921029b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8l8k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.736151 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-registry-tls\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.736165 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.737102 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f18077e0-f3d3-4327-a384-321e3d9b9f78-proxy-tls\") pod \"machine-config-operator-74547568cd-7bspm\" (UID: \"f18077e0-f3d3-4327-a384-321e3d9b9f78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.737162 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b36f0f60-82ce-49f5-b23b-2f585244e8db-trusted-ca\") pod \"ingress-operator-5b745b69d9-k8l6n\" (UID: \"b36f0f60-82ce-49f5-b23b-2f585244e8db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.737235 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b36f0f60-82ce-49f5-b23b-2f585244e8db-metrics-tls\") pod \"ingress-operator-5b745b69d9-k8l6n\" (UID: \"b36f0f60-82ce-49f5-b23b-2f585244e8db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.737799 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee7591a8-5999-4c4f-a0c8-de4fc90ac59d-config\") pod \"machine-approver-56656f9798-g6nz6\" (UID: \"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.737819 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.737835 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/facbc367-fb0c-466a-a01e-8f1118db6fcb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n6lq9\" (UID: \"facbc367-fb0c-466a-a01e-8f1118db6fcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.738242 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-84gx4\" (UniqueName: \"kubernetes.io/projected/a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f-kube-api-access-84gx4\") pod \"multus-admission-controller-857f4d67dd-bftgd\" (UID: \"a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bftgd" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.738297 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a44f7c6d-d509-4a18-9fd9-47419d54af4b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6stnj\" (UID: \"a44f7c6d-d509-4a18-9fd9-47419d54af4b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.738313 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jjs\" (UniqueName: \"kubernetes.io/projected/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-kube-api-access-45jjs\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.738386 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f18077e0-f3d3-4327-a384-321e3d9b9f78-images\") pod \"machine-config-operator-74547568cd-7bspm\" (UID: \"f18077e0-f3d3-4327-a384-321e3d9b9f78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.740537 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e4ff33b-3159-4449-87fa-36031498cfbc-trusted-ca\") pod \"console-operator-58897d9998-xq8v7\" (UID: \"8e4ff33b-3159-4449-87fa-36031498cfbc\") " pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.740866 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.741081 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/01678d74-64f0-4bee-b900-6dd92b577842-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.741156 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfl26\" (UniqueName: \"kubernetes.io/projected/8e4ff33b-3159-4449-87fa-36031498cfbc-kube-api-access-rfl26\") pod \"console-operator-58897d9998-xq8v7\" (UID: \"8e4ff33b-3159-4449-87fa-36031498cfbc\") " pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.741738 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ac9e8a-e68c-4d22-8841-3f36863b0574-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mlfhh\" (UID: \"f1ac9e8a-e68c-4d22-8841-3f36863b0574\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.742055 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.742098 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/99dd4351-6bbc-4e6e-b08f-88de7315987b-srv-cert\") pod \"olm-operator-6b444d44fb-bm78k\" (UID: \"99dd4351-6bbc-4e6e-b08f-88de7315987b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.742142 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3999e53-dcc7-4e16-8463-de30fb7fcbf6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r2cqm\" (UID: \"d3999e53-dcc7-4e16-8463-de30fb7fcbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.742164 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-serving-cert\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.742235 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f18077e0-f3d3-4327-a384-321e3d9b9f78-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7bspm\" (UID: \"f18077e0-f3d3-4327-a384-321e3d9b9f78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.743180 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-trusted-ca-bundle\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.744413 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e4ff33b-3159-4449-87fa-36031498cfbc-serving-cert\") pod \"console-operator-58897d9998-xq8v7\" (UID: \"8e4ff33b-3159-4449-87fa-36031498cfbc\") " pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.744455 4772 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/99dd4351-6bbc-4e6e-b08f-88de7315987b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bm78k\" (UID: \"99dd4351-6bbc-4e6e-b08f-88de7315987b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.758654 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.765655 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.791248 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.812777 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.816520 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"] Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.845879 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846023 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-audit-dir\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846053 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-mountpoint-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846073 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zjdx\" (UniqueName: \"kubernetes.io/projected/8b29fa00-3205-4f7c-8f5f-671c7921029b-kube-api-access-4zjdx\") pod \"control-plane-machine-set-operator-78cbb6b69f-w8l8k\" (UID: \"8b29fa00-3205-4f7c-8f5f-671c7921029b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8l8k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846096 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbcxr\" (UniqueName: \"kubernetes.io/projected/1324dfe7-848a-4347-b1ce-6e80f0c20d0c-kube-api-access-dbcxr\") pod \"service-ca-operator-777779d784-wh4vl\" (UID: \"1324dfe7-848a-4347-b1ce-6e80f0c20d0c\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846114 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-registry-tls\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846131 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846146 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f18077e0-f3d3-4327-a384-321e3d9b9f78-proxy-tls\") pod \"machine-config-operator-74547568cd-7bspm\" (UID: \"f18077e0-f3d3-4327-a384-321e3d9b9f78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846163 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b36f0f60-82ce-49f5-b23b-2f585244e8db-trusted-ca\") pod \"ingress-operator-5b745b69d9-k8l6n\" (UID: \"b36f0f60-82ce-49f5-b23b-2f585244e8db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846178 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a84795d5-4c6b-49d8-b2a8-fe8a086bcf52-apiservice-cert\") pod \"packageserver-d55dfcdfc-j7mqc\" (UID: \"a84795d5-4c6b-49d8-b2a8-fe8a086bcf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846193 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-csi-data-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846209 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b36f0f60-82ce-49f5-b23b-2f585244e8db-metrics-tls\") pod \"ingress-operator-5b745b69d9-k8l6n\" (UID: \"b36f0f60-82ce-49f5-b23b-2f585244e8db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846237 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1324dfe7-848a-4347-b1ce-6e80f0c20d0c-serving-cert\") pod \"service-ca-operator-777779d784-wh4vl\" (UID: \"1324dfe7-848a-4347-b1ce-6e80f0c20d0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846255 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee7591a8-5999-4c4f-a0c8-de4fc90ac59d-config\") pod \"machine-approver-56656f9798-g6nz6\" (UID: \"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846271 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: E1128 11:08:55.846294 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:56.346274178 +0000 UTC m=+134.669517395 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846333 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/facbc367-fb0c-466a-a01e-8f1118db6fcb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n6lq9\" (UID: \"facbc367-fb0c-466a-a01e-8f1118db6fcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846382 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c687f311-bc59-4115-8bd9-46f90c85136f-certs\") pod \"machine-config-server-bqx72\" (UID: \"c687f311-bc59-4115-8bd9-46f90c85136f\") " pod="openshift-machine-config-operator/machine-config-server-bqx72" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846416 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84gx4\" (UniqueName: \"kubernetes.io/projected/a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f-kube-api-access-84gx4\") pod \"multus-admission-controller-857f4d67dd-bftgd\" (UID: \"a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bftgd" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846463 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a44f7c6d-d509-4a18-9fd9-47419d54af4b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6stnj\" (UID: \"a44f7c6d-d509-4a18-9fd9-47419d54af4b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846485 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45jjs\" (UniqueName: 
\"kubernetes.io/projected/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-kube-api-access-45jjs\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846510 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8903d4eb-b874-4abf-86b6-e44e7bd951b0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fs2tk\" (UID: \"8903d4eb-b874-4abf-86b6-e44e7bd951b0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846558 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f18077e0-f3d3-4327-a384-321e3d9b9f78-images\") pod \"machine-config-operator-74547568cd-7bspm\" (UID: \"f18077e0-f3d3-4327-a384-321e3d9b9f78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846581 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e4ff33b-3159-4449-87fa-36031498cfbc-trusted-ca\") pod \"console-operator-58897d9998-xq8v7\" (UID: \"8e4ff33b-3159-4449-87fa-36031498cfbc\") " pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846622 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846716 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-registration-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846752 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/01678d74-64f0-4bee-b900-6dd92b577842-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846774 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfl26\" (UniqueName: \"kubernetes.io/projected/8e4ff33b-3159-4449-87fa-36031498cfbc-kube-api-access-rfl26\") pod \"console-operator-58897d9998-xq8v7\" (UID: \"8e4ff33b-3159-4449-87fa-36031498cfbc\") " pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846799 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24396734-e237-4fb3-9cae-8c08db3a9122-config-volume\") pod \"collect-profiles-29405460-hh2nf\" (UID: 
\"24396734-e237-4fb3-9cae-8c08db3a9122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846830 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ac9e8a-e68c-4d22-8841-3f36863b0574-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mlfhh\" (UID: \"f1ac9e8a-e68c-4d22-8841-3f36863b0574\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846867 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpqhx\" (UniqueName: \"kubernetes.io/projected/e08207c7-a494-42f9-b59a-946ca0134a24-kube-api-access-qpqhx\") pod \"dns-operator-744455d44c-n9b9t\" (UID: \"e08207c7-a494-42f9-b59a-946ca0134a24\") " pod="openshift-dns-operator/dns-operator-744455d44c-n9b9t" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846879 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846897 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/99dd4351-6bbc-4e6e-b08f-88de7315987b-srv-cert\") pod \"olm-operator-6b444d44fb-bm78k\" (UID: \"99dd4351-6bbc-4e6e-b08f-88de7315987b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846927 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846949 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f18077e0-f3d3-4327-a384-321e3d9b9f78-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7bspm\" (UID: \"f18077e0-f3d3-4327-a384-321e3d9b9f78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846975 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3999e53-dcc7-4e16-8463-de30fb7fcbf6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r2cqm\" (UID: \"d3999e53-dcc7-4e16-8463-de30fb7fcbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.846996 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-serving-cert\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " 
pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847043 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-trusted-ca-bundle\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847071 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e4ff33b-3159-4449-87fa-36031498cfbc-serving-cert\") pod \"console-operator-58897d9998-xq8v7\" (UID: \"8e4ff33b-3159-4449-87fa-36031498cfbc\") " pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847092 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5zgg\" (UniqueName: \"kubernetes.io/projected/a84795d5-4c6b-49d8-b2a8-fe8a086bcf52-kube-api-access-z5zgg\") pod \"packageserver-d55dfcdfc-j7mqc\" (UID: \"a84795d5-4c6b-49d8-b2a8-fe8a086bcf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847117 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wmgp\" (UniqueName: \"kubernetes.io/projected/a5b41370-17d7-4d6d-b93b-7d10b6403e53-kube-api-access-6wmgp\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847145 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/99dd4351-6bbc-4e6e-b08f-88de7315987b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bm78k\" (UID: \"99dd4351-6bbc-4e6e-b08f-88de7315987b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847167 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/facbc367-fb0c-466a-a01e-8f1118db6fcb-config\") pod \"kube-apiserver-operator-766d6c64bb-n6lq9\" (UID: \"facbc367-fb0c-466a-a01e-8f1118db6fcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847195 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-bound-sa-token\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847214 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zn47\" (UniqueName: \"kubernetes.io/projected/5514e80d-3def-4db9-90cc-67918bfa211a-kube-api-access-6zn47\") pod \"dns-default-prq4h\" (UID: \"5514e80d-3def-4db9-90cc-67918bfa211a\") " pod="openshift-dns/dns-default-prq4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847230 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bdb050b4-13af-40be-903a-2c5b1e233d7b-signing-cabundle\") pod \"service-ca-9c57cc56f-tfv7p\" (UID: \"bdb050b4-13af-40be-903a-2c5b1e233d7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-tfv7p" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847251 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/01678d74-64f0-4bee-b900-6dd92b577842-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847266 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-oauth-serving-cert\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847284 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z7ck\" (UniqueName: \"kubernetes.io/projected/c687f311-bc59-4115-8bd9-46f90c85136f-kube-api-access-5z7ck\") pod \"machine-config-server-bqx72\" (UID: \"c687f311-bc59-4115-8bd9-46f90c85136f\") " pod="openshift-machine-config-operator/machine-config-server-bqx72" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847352 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghl8w\" (UniqueName: \"kubernetes.io/projected/048a22d4-da9f-4fc6-837a-c5398965a0f0-kube-api-access-ghl8w\") pod \"migrator-59844c95c7-2dg24\" (UID: \"048a22d4-da9f-4fc6-837a-c5398965a0f0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dg24" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847506 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847526 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbtcd\" (UniqueName: \"kubernetes.io/projected/a44f7c6d-d509-4a18-9fd9-47419d54af4b-kube-api-access-xbtcd\") pod \"cluster-image-registry-operator-dc59b4c8b-6stnj\" (UID: \"a44f7c6d-d509-4a18-9fd9-47419d54af4b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847542 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8hrc\" (UniqueName: \"kubernetes.io/projected/f18077e0-f3d3-4327-a384-321e3d9b9f78-kube-api-access-x8hrc\") pod \"machine-config-operator-74547568cd-7bspm\" (UID: \"f18077e0-f3d3-4327-a384-321e3d9b9f78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847564 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psmlt\" (UniqueName: 
\"kubernetes.io/projected/ee7591a8-5999-4c4f-a0c8-de4fc90ac59d-kube-api-access-psmlt\") pod \"machine-approver-56656f9798-g6nz6\" (UID: \"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847585 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cngz\" (UniqueName: \"kubernetes.io/projected/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-kube-api-access-6cngz\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847605 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a44f7c6d-d509-4a18-9fd9-47419d54af4b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6stnj\" (UID: \"a44f7c6d-d509-4a18-9fd9-47419d54af4b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847639 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6310d4f4-cea5-4b44-a388-193d13bc5ec7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dttbx\" (UID: \"6310d4f4-cea5-4b44-a388-193d13bc5ec7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847661 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l92fb\" (UniqueName: \"kubernetes.io/projected/b36f0f60-82ce-49f5-b23b-2f585244e8db-kube-api-access-l92fb\") pod \"ingress-operator-5b745b69d9-k8l6n\" (UID: \"b36f0f60-82ce-49f5-b23b-2f585244e8db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847678 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bftgd\" (UID: \"a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bftgd" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847696 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/502e7773-b2fc-46ea-97be-bd253f318bcd-cert\") pod \"ingress-canary-tbqqf\" (UID: \"502e7773-b2fc-46ea-97be-bd253f318bcd\") " pod="openshift-ingress-canary/ingress-canary-tbqqf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847712 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847740 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-audit-policies\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: 
\"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847765 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b29fa00-3205-4f7c-8f5f-671c7921029b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w8l8k\" (UID: \"8b29fa00-3205-4f7c-8f5f-671c7921029b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8l8k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847790 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bdb050b4-13af-40be-903a-2c5b1e233d7b-signing-key\") pod \"service-ca-9c57cc56f-tfv7p\" (UID: \"bdb050b4-13af-40be-903a-2c5b1e233d7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-tfv7p" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847813 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pj6t\" (UniqueName: \"kubernetes.io/projected/d3999e53-dcc7-4e16-8463-de30fb7fcbf6-kube-api-access-2pj6t\") pod \"machine-config-controller-84d6567774-r2cqm\" (UID: \"d3999e53-dcc7-4e16-8463-de30fb7fcbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847835 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6310d4f4-cea5-4b44-a388-193d13bc5ec7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dttbx\" (UID: \"6310d4f4-cea5-4b44-a388-193d13bc5ec7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847860 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbgx\" (UniqueName: \"kubernetes.io/projected/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-kube-api-access-qfbgx\") pod \"marketplace-operator-79b997595-rnhdf\" (UID: \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\") " pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847884 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847901 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/facbc367-fb0c-466a-a01e-8f1118db6fcb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n6lq9\" (UID: \"facbc367-fb0c-466a-a01e-8f1118db6fcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847925 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/01678d74-64f0-4bee-b900-6dd92b577842-registry-certificates\") pod 
\"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847941 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.847975 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5514e80d-3def-4db9-90cc-67918bfa211a-config-volume\") pod \"dns-default-prq4h\" (UID: \"5514e80d-3def-4db9-90cc-67918bfa211a\") " pod="openshift-dns/dns-default-prq4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848005 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-service-ca\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848020 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6310d4f4-cea5-4b44-a388-193d13bc5ec7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dttbx\" (UID: \"6310d4f4-cea5-4b44-a388-193d13bc5ec7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848036 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1324dfe7-848a-4347-b1ce-6e80f0c20d0c-config\") pod \"service-ca-operator-777779d784-wh4vl\" (UID: \"1324dfe7-848a-4347-b1ce-6e80f0c20d0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848051 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e08207c7-a494-42f9-b59a-946ca0134a24-metrics-tls\") pod \"dns-operator-744455d44c-n9b9t\" (UID: \"e08207c7-a494-42f9-b59a-946ca0134a24\") " pod="openshift-dns-operator/dns-operator-744455d44c-n9b9t" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848070 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzgk4\" (UniqueName: \"kubernetes.io/projected/bdb050b4-13af-40be-903a-2c5b1e233d7b-kube-api-access-hzgk4\") pod \"service-ca-9c57cc56f-tfv7p\" (UID: \"bdb050b4-13af-40be-903a-2c5b1e233d7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-tfv7p" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848086 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e4ff33b-3159-4449-87fa-36031498cfbc-config\") pod \"console-operator-58897d9998-xq8v7\" (UID: \"8e4ff33b-3159-4449-87fa-36031498cfbc\") " pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848101 4772 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8903d4eb-b874-4abf-86b6-e44e7bd951b0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fs2tk\" (UID: \"8903d4eb-b874-4abf-86b6-e44e7bd951b0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848118 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7f9f\" (UniqueName: \"kubernetes.io/projected/9078e378-b549-4e5e-82ee-b9a96cf8e4da-kube-api-access-d7f9f\") pod \"downloads-7954f5f757-wwxmx\" (UID: \"9078e378-b549-4e5e-82ee-b9a96cf8e4da\") " pod="openshift-console/downloads-7954f5f757-wwxmx" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848143 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5tbn\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-kube-api-access-p5tbn\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848159 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848176 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c687f311-bc59-4115-8bd9-46f90c85136f-node-bootstrap-token\") pod \"machine-config-server-bqx72\" (UID: \"c687f311-bc59-4115-8bd9-46f90c85136f\") " pod="openshift-machine-config-operator/machine-config-server-bqx72" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848192 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rnhdf\" (UID: \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\") " pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848214 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-config\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848235 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rnhdf\" (UID: \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\") " pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848253 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a84795d5-4c6b-49d8-b2a8-fe8a086bcf52-webhook-cert\") pod \"packageserver-d55dfcdfc-j7mqc\" (UID: \"a84795d5-4c6b-49d8-b2a8-fe8a086bcf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848276 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-plugins-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848293 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24396734-e237-4fb3-9cae-8c08db3a9122-secret-volume\") pod \"collect-profiles-29405460-hh2nf\" (UID: \"24396734-e237-4fb3-9cae-8c08db3a9122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848307 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shp67\" (UniqueName: \"kubernetes.io/projected/24396734-e237-4fb3-9cae-8c08db3a9122-kube-api-access-shp67\") pod \"collect-profiles-29405460-hh2nf\" (UID: \"24396734-e237-4fb3-9cae-8c08db3a9122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848334 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h925x\" (UniqueName: \"kubernetes.io/projected/8903d4eb-b874-4abf-86b6-e44e7bd951b0-kube-api-access-h925x\") pod \"openshift-apiserver-operator-796bbdcf4f-fs2tk\" (UID: \"8903d4eb-b874-4abf-86b6-e44e7bd951b0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848408 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a44f7c6d-d509-4a18-9fd9-47419d54af4b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6stnj\" (UID: \"a44f7c6d-d509-4a18-9fd9-47419d54af4b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848430 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fdd9\" (UniqueName: \"kubernetes.io/projected/f1ac9e8a-e68c-4d22-8841-3f36863b0574-kube-api-access-5fdd9\") pod \"package-server-manager-789f6589d5-mlfhh\" (UID: \"f1ac9e8a-e68c-4d22-8841-3f36863b0574\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848447 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee7591a8-5999-4c4f-a0c8-de4fc90ac59d-auth-proxy-config\") pod \"machine-approver-56656f9798-g6nz6\" (UID: \"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848463 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/ee7591a8-5999-4c4f-a0c8-de4fc90ac59d-machine-approver-tls\") pod \"machine-approver-56656f9798-g6nz6\" (UID: \"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848479 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848497 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-oauth-config\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848514 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twt8h\" (UniqueName: \"kubernetes.io/projected/502e7773-b2fc-46ea-97be-bd253f318bcd-kube-api-access-twt8h\") pod \"ingress-canary-tbqqf\" (UID: \"502e7773-b2fc-46ea-97be-bd253f318bcd\") " pod="openshift-ingress-canary/ingress-canary-tbqqf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848531 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-socket-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848548 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848572 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5514e80d-3def-4db9-90cc-67918bfa211a-metrics-tls\") pod \"dns-default-prq4h\" (UID: \"5514e80d-3def-4db9-90cc-67918bfa211a\") " pod="openshift-dns/dns-default-prq4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848591 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01678d74-64f0-4bee-b900-6dd92b577842-trusted-ca\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848620 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b36f0f60-82ce-49f5-b23b-2f585244e8db-bound-sa-token\") pod \"ingress-operator-5b745b69d9-k8l6n\" (UID: \"b36f0f60-82ce-49f5-b23b-2f585244e8db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" Nov 
28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848636 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a84795d5-4c6b-49d8-b2a8-fe8a086bcf52-tmpfs\") pod \"packageserver-d55dfcdfc-j7mqc\" (UID: \"a84795d5-4c6b-49d8-b2a8-fe8a086bcf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848653 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3999e53-dcc7-4e16-8463-de30fb7fcbf6-proxy-tls\") pod \"machine-config-controller-84d6567774-r2cqm\" (UID: \"d3999e53-dcc7-4e16-8463-de30fb7fcbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848669 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmms7\" (UniqueName: \"kubernetes.io/projected/99dd4351-6bbc-4e6e-b08f-88de7315987b-kube-api-access-fmms7\") pod \"olm-operator-6b444d44fb-bm78k\" (UID: \"99dd4351-6bbc-4e6e-b08f-88de7315987b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.848712 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.849803 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-audit-dir\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.850756 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b36f0f60-82ce-49f5-b23b-2f585244e8db-trusted-ca\") pod \"ingress-operator-5b745b69d9-k8l6n\" (UID: \"b36f0f60-82ce-49f5-b23b-2f585244e8db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.851947 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-config\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.852282 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.852289 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-service-ca\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.852485 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee7591a8-5999-4c4f-a0c8-de4fc90ac59d-config\") pod \"machine-approver-56656f9798-g6nz6\" (UID: \"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" Nov 28 11:08:55 crc kubenswrapper[4772]: E1128 11:08:55.852522 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:56.352504015 +0000 UTC m=+134.675747242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.853001 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.853298 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6310d4f4-cea5-4b44-a388-193d13bc5ec7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dttbx\" (UID: \"6310d4f4-cea5-4b44-a388-193d13bc5ec7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.853837 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3999e53-dcc7-4e16-8463-de30fb7fcbf6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r2cqm\" (UID: \"d3999e53-dcc7-4e16-8463-de30fb7fcbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.853972 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/01678d74-64f0-4bee-b900-6dd92b577842-registry-certificates\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.854169 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-oauth-serving-cert\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 
crc kubenswrapper[4772]: I1128 11:08:55.854510 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-trusted-ca-bundle\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.854810 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-audit-policies\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.855062 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f18077e0-f3d3-4327-a384-321e3d9b9f78-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7bspm\" (UID: \"f18077e0-f3d3-4327-a384-321e3d9b9f78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.855131 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e4ff33b-3159-4449-87fa-36031498cfbc-trusted-ca\") pod \"console-operator-58897d9998-xq8v7\" (UID: \"8e4ff33b-3159-4449-87fa-36031498cfbc\") " pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.855231 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f18077e0-f3d3-4327-a384-321e3d9b9f78-images\") pod \"machine-config-operator-74547568cd-7bspm\" (UID: \"f18077e0-f3d3-4327-a384-321e3d9b9f78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.855441 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee7591a8-5999-4c4f-a0c8-de4fc90ac59d-auth-proxy-config\") pod \"machine-approver-56656f9798-g6nz6\" (UID: \"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.855634 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/01678d74-64f0-4bee-b900-6dd92b577842-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.855862 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a44f7c6d-d509-4a18-9fd9-47419d54af4b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6stnj\" (UID: \"a44f7c6d-d509-4a18-9fd9-47419d54af4b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.856191 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b36f0f60-82ce-49f5-b23b-2f585244e8db-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-k8l6n\" (UID: \"b36f0f60-82ce-49f5-b23b-2f585244e8db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.856055 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-registry-tls\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.856412 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/facbc367-fb0c-466a-a01e-8f1118db6fcb-config\") pod \"kube-apiserver-operator-766d6c64bb-n6lq9\" (UID: \"facbc367-fb0c-466a-a01e-8f1118db6fcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.856888 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01678d74-64f0-4bee-b900-6dd92b577842-trusted-ca\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.856906 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/facbc367-fb0c-466a-a01e-8f1118db6fcb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n6lq9\" (UID: \"facbc367-fb0c-466a-a01e-8f1118db6fcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.857342 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e4ff33b-3159-4449-87fa-36031498cfbc-config\") pod \"console-operator-58897d9998-xq8v7\" (UID: \"8e4ff33b-3159-4449-87fa-36031498cfbc\") " pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.857621 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f18077e0-f3d3-4327-a384-321e3d9b9f78-proxy-tls\") pod \"machine-config-operator-74547568cd-7bspm\" (UID: \"f18077e0-f3d3-4327-a384-321e3d9b9f78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.858290 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.858414 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.859159 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ac9e8a-e68c-4d22-8841-3f36863b0574-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mlfhh\" (UID: \"f1ac9e8a-e68c-4d22-8841-3f36863b0574\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.859373 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.859908 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bftgd\" (UID: \"a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bftgd" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.860323 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-serving-cert\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.860522 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.860560 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-oauth-config\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.860843 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e4ff33b-3159-4449-87fa-36031498cfbc-serving-cert\") pod \"console-operator-58897d9998-xq8v7\" (UID: \"8e4ff33b-3159-4449-87fa-36031498cfbc\") " pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.860910 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.860950 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/99dd4351-6bbc-4e6e-b08f-88de7315987b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bm78k\" (UID: \"99dd4351-6bbc-4e6e-b08f-88de7315987b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.861253 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.861696 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a44f7c6d-d509-4a18-9fd9-47419d54af4b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6stnj\" (UID: \"a44f7c6d-d509-4a18-9fd9-47419d54af4b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.861902 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3999e53-dcc7-4e16-8463-de30fb7fcbf6-proxy-tls\") pod \"machine-config-controller-84d6567774-r2cqm\" (UID: \"d3999e53-dcc7-4e16-8463-de30fb7fcbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.862263 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/01678d74-64f0-4bee-b900-6dd92b577842-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.863014 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.867294 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6310d4f4-cea5-4b44-a388-193d13bc5ec7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dttbx\" (UID: \"6310d4f4-cea5-4b44-a388-193d13bc5ec7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.867507 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/99dd4351-6bbc-4e6e-b08f-88de7315987b-srv-cert\") pod \"olm-operator-6b444d44fb-bm78k\" (UID: \"99dd4351-6bbc-4e6e-b08f-88de7315987b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.868303 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.869018 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b29fa00-3205-4f7c-8f5f-671c7921029b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w8l8k\" (UID: \"8b29fa00-3205-4f7c-8f5f-671c7921029b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8l8k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.875921 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ee7591a8-5999-4c4f-a0c8-de4fc90ac59d-machine-approver-tls\") pod \"machine-approver-56656f9798-g6nz6\" (UID: \"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.892104 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zjdx\" (UniqueName: \"kubernetes.io/projected/8b29fa00-3205-4f7c-8f5f-671c7921029b-kube-api-access-4zjdx\") pod \"control-plane-machine-set-operator-78cbb6b69f-w8l8k\" (UID: \"8b29fa00-3205-4f7c-8f5f-671c7921029b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8l8k" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.928833 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pj6t\" (UniqueName: \"kubernetes.io/projected/d3999e53-dcc7-4e16-8463-de30fb7fcbf6-kube-api-access-2pj6t\") pod \"machine-config-controller-84d6567774-r2cqm\" (UID: \"d3999e53-dcc7-4e16-8463-de30fb7fcbf6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.935839 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-bound-sa-token\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.950006 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:55 crc kubenswrapper[4772]: E1128 11:08:55.950172 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:56.450146697 +0000 UTC m=+134.773389924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.950229 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bdb050b4-13af-40be-903a-2c5b1e233d7b-signing-key\") pod \"service-ca-9c57cc56f-tfv7p\" (UID: \"bdb050b4-13af-40be-903a-2c5b1e233d7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-tfv7p" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.950288 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfbgx\" (UniqueName: \"kubernetes.io/projected/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-kube-api-access-qfbgx\") pod \"marketplace-operator-79b997595-rnhdf\" (UID: \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\") " pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.950327 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5514e80d-3def-4db9-90cc-67918bfa211a-config-volume\") pod \"dns-default-prq4h\" (UID: \"5514e80d-3def-4db9-90cc-67918bfa211a\") " pod="openshift-dns/dns-default-prq4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.950641 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1324dfe7-848a-4347-b1ce-6e80f0c20d0c-config\") pod \"service-ca-operator-777779d784-wh4vl\" (UID: \"1324dfe7-848a-4347-b1ce-6e80f0c20d0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.950662 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e08207c7-a494-42f9-b59a-946ca0134a24-metrics-tls\") pod \"dns-operator-744455d44c-n9b9t\" (UID: \"e08207c7-a494-42f9-b59a-946ca0134a24\") " pod="openshift-dns-operator/dns-operator-744455d44c-n9b9t" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.950710 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzgk4\" (UniqueName: \"kubernetes.io/projected/bdb050b4-13af-40be-903a-2c5b1e233d7b-kube-api-access-hzgk4\") pod \"service-ca-9c57cc56f-tfv7p\" (UID: \"bdb050b4-13af-40be-903a-2c5b1e233d7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-tfv7p" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.950729 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8903d4eb-b874-4abf-86b6-e44e7bd951b0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fs2tk\" (UID: \"8903d4eb-b874-4abf-86b6-e44e7bd951b0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951185 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c687f311-bc59-4115-8bd9-46f90c85136f-node-bootstrap-token\") pod 
\"machine-config-server-bqx72\" (UID: \"c687f311-bc59-4115-8bd9-46f90c85136f\") " pod="openshift-machine-config-operator/machine-config-server-bqx72" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951226 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rnhdf\" (UID: \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\") " pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951253 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rnhdf\" (UID: \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\") " pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951271 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a84795d5-4c6b-49d8-b2a8-fe8a086bcf52-webhook-cert\") pod \"packageserver-d55dfcdfc-j7mqc\" (UID: \"a84795d5-4c6b-49d8-b2a8-fe8a086bcf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951280 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft"] Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951307 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-plugins-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951285 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1324dfe7-848a-4347-b1ce-6e80f0c20d0c-config\") pod \"service-ca-operator-777779d784-wh4vl\" (UID: \"1324dfe7-848a-4347-b1ce-6e80f0c20d0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951346 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h925x\" (UniqueName: \"kubernetes.io/projected/8903d4eb-b874-4abf-86b6-e44e7bd951b0-kube-api-access-h925x\") pod \"openshift-apiserver-operator-796bbdcf4f-fs2tk\" (UID: \"8903d4eb-b874-4abf-86b6-e44e7bd951b0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951416 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24396734-e237-4fb3-9cae-8c08db3a9122-secret-volume\") pod \"collect-profiles-29405460-hh2nf\" (UID: \"24396734-e237-4fb3-9cae-8c08db3a9122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951443 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shp67\" (UniqueName: 
\"kubernetes.io/projected/24396734-e237-4fb3-9cae-8c08db3a9122-kube-api-access-shp67\") pod \"collect-profiles-29405460-hh2nf\" (UID: \"24396734-e237-4fb3-9cae-8c08db3a9122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951482 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twt8h\" (UniqueName: \"kubernetes.io/projected/502e7773-b2fc-46ea-97be-bd253f318bcd-kube-api-access-twt8h\") pod \"ingress-canary-tbqqf\" (UID: \"502e7773-b2fc-46ea-97be-bd253f318bcd\") " pod="openshift-ingress-canary/ingress-canary-tbqqf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951544 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-socket-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951562 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5514e80d-3def-4db9-90cc-67918bfa211a-metrics-tls\") pod \"dns-default-prq4h\" (UID: \"5514e80d-3def-4db9-90cc-67918bfa211a\") " pod="openshift-dns/dns-default-prq4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951588 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a84795d5-4c6b-49d8-b2a8-fe8a086bcf52-tmpfs\") pod \"packageserver-d55dfcdfc-j7mqc\" (UID: \"a84795d5-4c6b-49d8-b2a8-fe8a086bcf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951627 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-mountpoint-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951659 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbcxr\" (UniqueName: \"kubernetes.io/projected/1324dfe7-848a-4347-b1ce-6e80f0c20d0c-kube-api-access-dbcxr\") pod \"service-ca-operator-777779d784-wh4vl\" (UID: \"1324dfe7-848a-4347-b1ce-6e80f0c20d0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951681 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a84795d5-4c6b-49d8-b2a8-fe8a086bcf52-apiservice-cert\") pod \"packageserver-d55dfcdfc-j7mqc\" (UID: \"a84795d5-4c6b-49d8-b2a8-fe8a086bcf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951696 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-csi-data-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951715 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1324dfe7-848a-4347-b1ce-6e80f0c20d0c-serving-cert\") pod \"service-ca-operator-777779d784-wh4vl\" (UID: \"1324dfe7-848a-4347-b1ce-6e80f0c20d0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951742 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c687f311-bc59-4115-8bd9-46f90c85136f-certs\") pod \"machine-config-server-bqx72\" (UID: \"c687f311-bc59-4115-8bd9-46f90c85136f\") " pod="openshift-machine-config-operator/machine-config-server-bqx72" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951765 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8903d4eb-b874-4abf-86b6-e44e7bd951b0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fs2tk\" (UID: \"8903d4eb-b874-4abf-86b6-e44e7bd951b0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951790 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-registration-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951814 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24396734-e237-4fb3-9cae-8c08db3a9122-config-volume\") pod \"collect-profiles-29405460-hh2nf\" (UID: \"24396734-e237-4fb3-9cae-8c08db3a9122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951806 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-plugins-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951834 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpqhx\" (UniqueName: \"kubernetes.io/projected/e08207c7-a494-42f9-b59a-946ca0134a24-kube-api-access-qpqhx\") pod \"dns-operator-744455d44c-n9b9t\" (UID: \"e08207c7-a494-42f9-b59a-946ca0134a24\") " pod="openshift-dns-operator/dns-operator-744455d44c-n9b9t" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951864 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5zgg\" (UniqueName: \"kubernetes.io/projected/a84795d5-4c6b-49d8-b2a8-fe8a086bcf52-kube-api-access-z5zgg\") pod \"packageserver-d55dfcdfc-j7mqc\" (UID: \"a84795d5-4c6b-49d8-b2a8-fe8a086bcf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951880 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wmgp\" (UniqueName: \"kubernetes.io/projected/a5b41370-17d7-4d6d-b93b-7d10b6403e53-kube-api-access-6wmgp\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " 
pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951898 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zn47\" (UniqueName: \"kubernetes.io/projected/5514e80d-3def-4db9-90cc-67918bfa211a-kube-api-access-6zn47\") pod \"dns-default-prq4h\" (UID: \"5514e80d-3def-4db9-90cc-67918bfa211a\") " pod="openshift-dns/dns-default-prq4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951912 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bdb050b4-13af-40be-903a-2c5b1e233d7b-signing-cabundle\") pod \"service-ca-9c57cc56f-tfv7p\" (UID: \"bdb050b4-13af-40be-903a-2c5b1e233d7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-tfv7p" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951933 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z7ck\" (UniqueName: \"kubernetes.io/projected/c687f311-bc59-4115-8bd9-46f90c85136f-kube-api-access-5z7ck\") pod \"machine-config-server-bqx72\" (UID: \"c687f311-bc59-4115-8bd9-46f90c85136f\") " pod="openshift-machine-config-operator/machine-config-server-bqx72" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.951969 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.952018 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/502e7773-b2fc-46ea-97be-bd253f318bcd-cert\") pod \"ingress-canary-tbqqf\" (UID: \"502e7773-b2fc-46ea-97be-bd253f318bcd\") " pod="openshift-ingress-canary/ingress-canary-tbqqf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.953449 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rnhdf\" (UID: \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\") " pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.953726 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rnhdf\" (UID: \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\") " pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.953888 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-registration-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.954025 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8903d4eb-b874-4abf-86b6-e44e7bd951b0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fs2tk\" (UID: \"8903d4eb-b874-4abf-86b6-e44e7bd951b0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.954058 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24396734-e237-4fb3-9cae-8c08db3a9122-secret-volume\") pod \"collect-profiles-29405460-hh2nf\" (UID: \"24396734-e237-4fb3-9cae-8c08db3a9122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.954211 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-mountpoint-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.954277 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-csi-data-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.954338 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5b41370-17d7-4d6d-b93b-7d10b6403e53-socket-dir\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:55 crc kubenswrapper[4772]: E1128 11:08:55.954529 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:56.454517865 +0000 UTC m=+134.777761092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.955721 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c687f311-bc59-4115-8bd9-46f90c85136f-node-bootstrap-token\") pod \"machine-config-server-bqx72\" (UID: \"c687f311-bc59-4115-8bd9-46f90c85136f\") " pod="openshift-machine-config-operator/machine-config-server-bqx72" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.956491 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bdb050b4-13af-40be-903a-2c5b1e233d7b-signing-key\") pod \"service-ca-9c57cc56f-tfv7p\" (UID: \"bdb050b4-13af-40be-903a-2c5b1e233d7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-tfv7p" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.957277 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1324dfe7-848a-4347-b1ce-6e80f0c20d0c-serving-cert\") pod \"service-ca-operator-777779d784-wh4vl\" (UID: \"1324dfe7-848a-4347-b1ce-6e80f0c20d0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.960172 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8903d4eb-b874-4abf-86b6-e44e7bd951b0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fs2tk\" (UID: \"8903d4eb-b874-4abf-86b6-e44e7bd951b0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.961907 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e08207c7-a494-42f9-b59a-946ca0134a24-metrics-tls\") pod \"dns-operator-744455d44c-n9b9t\" (UID: \"e08207c7-a494-42f9-b59a-946ca0134a24\") " pod="openshift-dns-operator/dns-operator-744455d44c-n9b9t" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.962569 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/502e7773-b2fc-46ea-97be-bd253f318bcd-cert\") pod \"ingress-canary-tbqqf\" (UID: \"502e7773-b2fc-46ea-97be-bd253f318bcd\") " pod="openshift-ingress-canary/ingress-canary-tbqqf" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.962735 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a84795d5-4c6b-49d8-b2a8-fe8a086bcf52-apiservice-cert\") pod \"packageserver-d55dfcdfc-j7mqc\" (UID: \"a84795d5-4c6b-49d8-b2a8-fe8a086bcf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.967985 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c687f311-bc59-4115-8bd9-46f90c85136f-certs\") pod \"machine-config-server-bqx72\" (UID: \"c687f311-bc59-4115-8bd9-46f90c85136f\") " 
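Every MountDevice attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fails the same way: the volume's CSI driver, kubevirt.io.hostpath-provisioner, has not yet registered with the kubelet, because the csi-hostpathplugin-sps4h pod that serves it is itself still being started (its socket-dir, registration-dir and csi-data-dir mounts are being set up in the lines above). Until the plugin announces itself over the kubelet's plugin-registration socket, the driver name is simply absent from the kubelet's in-memory registry, so each attempt fails fast and is requeued. A hedged Go sketch of that lookup; the type and method names are illustrative, not kubelet code:

    package main

    import (
        "fmt"
        "sync"
    )

    // csiRegistry is a hypothetical stand-in for the kubelet's map of CSI
    // drivers that have completed plugin registration.
    type csiRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> plugin socket endpoint
    }

    func (r *csiRegistry) register(name, endpoint string) {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.drivers[name] = endpoint
    }

    // client reproduces the error shape seen in the log when the plugin pod
    // has not registered yet.
    func (r *csiRegistry) client(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[name]
        if !ok {
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return ep, nil
    }

    func main() {
        reg := &csiRegistry{drivers: map[string]string{}}

        // Before csi-hostpathplugin-sps4h is up: every MountDevice fails fast.
        if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println("MountDevice fails:", err)
        }

        // After the plugin registers over the registration socket (the path
        // below is illustrative), the same lookup succeeds.
        reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
        if ep, err := reg.client("kubevirt.io.hostpath-provisioner"); err == nil {
            fmt.Println("MountDevice can proceed via", ep)
        }
    }

The condition is self-healing: once the plugin pod registers, the queued mount and unmount operations go through on their next retry.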
pod="openshift-machine-config-operator/machine-config-server-bqx72" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.968448 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a84795d5-4c6b-49d8-b2a8-fe8a086bcf52-webhook-cert\") pod \"packageserver-d55dfcdfc-j7mqc\" (UID: \"a84795d5-4c6b-49d8-b2a8-fe8a086bcf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.968643 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84gx4\" (UniqueName: \"kubernetes.io/projected/a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f-kube-api-access-84gx4\") pod \"multus-admission-controller-857f4d67dd-bftgd\" (UID: \"a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bftgd" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.971033 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bdb050b4-13af-40be-903a-2c5b1e233d7b-signing-cabundle\") pod \"service-ca-9c57cc56f-tfv7p\" (UID: \"bdb050b4-13af-40be-903a-2c5b1e233d7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-tfv7p" Nov 28 11:08:55 crc kubenswrapper[4772]: I1128 11:08:55.990083 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/facbc367-fb0c-466a-a01e-8f1118db6fcb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n6lq9\" (UID: \"facbc367-fb0c-466a-a01e-8f1118db6fcb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.013224 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbtcd\" (UniqueName: \"kubernetes.io/projected/a44f7c6d-d509-4a18-9fd9-47419d54af4b-kube-api-access-xbtcd\") pod \"cluster-image-registry-operator-dc59b4c8b-6stnj\" (UID: \"a44f7c6d-d509-4a18-9fd9-47419d54af4b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.030298 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7f9f\" (UniqueName: \"kubernetes.io/projected/9078e378-b549-4e5e-82ee-b9a96cf8e4da-kube-api-access-d7f9f\") pod \"downloads-7954f5f757-wwxmx\" (UID: \"9078e378-b549-4e5e-82ee-b9a96cf8e4da\") " pod="openshift-console/downloads-7954f5f757-wwxmx" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.053692 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:56 crc kubenswrapper[4772]: E1128 11:08:56.054228 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:56.554214402 +0000 UTC m=+134.877457629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.057643 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b36f0f60-82ce-49f5-b23b-2f585244e8db-bound-sa-token\") pod \"ingress-operator-5b745b69d9-k8l6n\" (UID: \"b36f0f60-82ce-49f5-b23b-2f585244e8db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.066476 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4dx8r"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.067052 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xntkm"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.072451 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4ct44"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.074304 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8l8k" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.074684 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5tbn\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-kube-api-access-p5tbn\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.089768 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8hrc\" (UniqueName: \"kubernetes.io/projected/f18077e0-f3d3-4327-a384-321e3d9b9f78-kube-api-access-x8hrc\") pod \"machine-config-operator-74547568cd-7bspm\" (UID: \"f18077e0-f3d3-4327-a384-321e3d9b9f78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.109300 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmms7\" (UniqueName: \"kubernetes.io/projected/99dd4351-6bbc-4e6e-b08f-88de7315987b-kube-api-access-fmms7\") pod \"olm-operator-6b444d44fb-bm78k\" (UID: \"99dd4351-6bbc-4e6e-b08f-88de7315987b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.121845 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.130616 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6310d4f4-cea5-4b44-a388-193d13bc5ec7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dttbx\" (UID: \"6310d4f4-cea5-4b44-a388-193d13bc5ec7\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.137731 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.147784 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fdd9\" (UniqueName: \"kubernetes.io/projected/f1ac9e8a-e68c-4d22-8841-3f36863b0574-kube-api-access-5fdd9\") pod \"package-server-manager-789f6589d5-mlfhh\" (UID: \"f1ac9e8a-e68c-4d22-8841-3f36863b0574\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.150584 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.155433 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:56 crc kubenswrapper[4772]: E1128 11:08:56.155722 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:56.655708355 +0000 UTC m=+134.978951582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.167375 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24396734-e237-4fb3-9cae-8c08db3a9122-config-volume\") pod \"collect-profiles-29405460-hh2nf\" (UID: \"24396734-e237-4fb3-9cae-8c08db3a9122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.167754 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5514e80d-3def-4db9-90cc-67918bfa211a-config-volume\") pod \"dns-default-prq4h\" (UID: \"5514e80d-3def-4db9-90cc-67918bfa211a\") " pod="openshift-dns/dns-default-prq4h" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.168976 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghl8w\" (UniqueName: \"kubernetes.io/projected/048a22d4-da9f-4fc6-837a-c5398965a0f0-kube-api-access-ghl8w\") pod \"migrator-59844c95c7-2dg24\" (UID: \"048a22d4-da9f-4fc6-837a-c5398965a0f0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dg24" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.170099 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a84795d5-4c6b-49d8-b2a8-fe8a086bcf52-tmpfs\") pod \"packageserver-d55dfcdfc-j7mqc\" (UID: \"a84795d5-4c6b-49d8-b2a8-fe8a086bcf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.170545 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5514e80d-3def-4db9-90cc-67918bfa211a-metrics-tls\") pod \"dns-default-prq4h\" (UID: \"5514e80d-3def-4db9-90cc-67918bfa211a\") " pod="openshift-dns/dns-default-prq4h" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.172885 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a44f7c6d-d509-4a18-9fd9-47419d54af4b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6stnj\" (UID: \"a44f7c6d-d509-4a18-9fd9-47419d54af4b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.175711 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh" Nov 28 11:08:56 crc kubenswrapper[4772]: W1128 11:08:56.177042 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda94fd25b_ca6a_4afe_922a_61ebfba248ed.slice/crio-77ced7fab3d5a538d86efbdc09b3e1ccfb3dca350d8f05396687b94a34261f0d WatchSource:0}: Error finding container 77ced7fab3d5a538d86efbdc09b3e1ccfb3dca350d8f05396687b94a34261f0d: Status 404 returned error can't find the container with id 77ced7fab3d5a538d86efbdc09b3e1ccfb3dca350d8f05396687b94a34261f0d Nov 28 11:08:56 crc kubenswrapper[4772]: W1128 11:08:56.177585 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8366b951_5122_40d9_b665_a3629d747906.slice/crio-533df59606b8e8e8d306203b2375f8691601895212ceb78d71ea986dc4a410a2 WatchSource:0}: Error finding container 533df59606b8e8e8d306203b2375f8691601895212ceb78d71ea986dc4a410a2: Status 404 returned error can't find the container with id 533df59606b8e8e8d306203b2375f8691601895212ceb78d71ea986dc4a410a2 Nov 28 11:08:56 crc kubenswrapper[4772]: W1128 11:08:56.181566 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e42674_84df_48c0_a77f_35afeb169848.slice/crio-8da10f54b6b32f7cbc851a2fd5da5b41108949ec538175a341aad3c807a27926 WatchSource:0}: Error finding container 8da10f54b6b32f7cbc851a2fd5da5b41108949ec538175a341aad3c807a27926: Status 404 returned error can't find the container with id 8da10f54b6b32f7cbc851a2fd5da5b41108949ec538175a341aad3c807a27926 Nov 28 11:08:56 crc kubenswrapper[4772]: W1128 11:08:56.182035 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a4baf8b_1e57_43cb_972a_b3ad73d3a192.slice/crio-a993de71da64b0639dba18cf753674c0e25cc1432990e628d0c4009f3ba062c2 WatchSource:0}: Error finding container a993de71da64b0639dba18cf753674c0e25cc1432990e628d0c4009f3ba062c2: Status 404 returned error can't find the container with id a993de71da64b0639dba18cf753674c0e25cc1432990e628d0c4009f3ba062c2 Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.183299 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.191492 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.192601 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psmlt\" (UniqueName: \"kubernetes.io/projected/ee7591a8-5999-4c4f-a0c8-de4fc90ac59d-kube-api-access-psmlt\") pod \"machine-approver-56656f9798-g6nz6\" (UID: \"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.207085 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-wwxmx" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.211333 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfl26\" (UniqueName: \"kubernetes.io/projected/8e4ff33b-3159-4449-87fa-36031498cfbc-kube-api-access-rfl26\") pod \"console-operator-58897d9998-xq8v7\" (UID: \"8e4ff33b-3159-4449-87fa-36031498cfbc\") " pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.216254 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bftgd" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.273730 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45jjs\" (UniqueName: \"kubernetes.io/projected/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-kube-api-access-45jjs\") pod \"oauth-openshift-558db77b4-vsrmr\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.277557 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:56 crc kubenswrapper[4772]: E1128 11:08:56.278176 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:56.778161362 +0000 UTC m=+135.101404589 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.285148 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l92fb\" (UniqueName: \"kubernetes.io/projected/b36f0f60-82ce-49f5-b23b-2f585244e8db-kube-api-access-l92fb\") pod \"ingress-operator-5b745b69d9-k8l6n\" (UID: \"b36f0f60-82ce-49f5-b23b-2f585244e8db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.289420 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cngz\" (UniqueName: \"kubernetes.io/projected/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-kube-api-access-6cngz\") pod \"console-f9d7485db-xffg6\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") " pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.295499 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.296841 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.304729 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l8xps"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.304769 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pv78h"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.311031 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfbgx\" (UniqueName: \"kubernetes.io/projected/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-kube-api-access-qfbgx\") pod \"marketplace-operator-79b997595-rnhdf\" (UID: \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\") " pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.311270 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.331874 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzgk4\" (UniqueName: \"kubernetes.io/projected/bdb050b4-13af-40be-903a-2c5b1e233d7b-kube-api-access-hzgk4\") pod \"service-ca-9c57cc56f-tfv7p\" (UID: \"bdb050b4-13af-40be-903a-2c5b1e233d7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-tfv7p" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.351340 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h925x\" (UniqueName: \"kubernetes.io/projected/8903d4eb-b874-4abf-86b6-e44e7bd951b0-kube-api-access-h925x\") pod \"openshift-apiserver-operator-796bbdcf4f-fs2tk\" (UID: \"8903d4eb-b874-4abf-86b6-e44e7bd951b0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.367063 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.367111 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-p6f5q"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.371461 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.380310 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:56 crc kubenswrapper[4772]: E1128 11:08:56.380768 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:56.880751561 +0000 UTC m=+135.203994788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.382696 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.383347 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5zgg\" (UniqueName: \"kubernetes.io/projected/a84795d5-4c6b-49d8-b2a8-fe8a086bcf52-kube-api-access-z5zgg\") pod \"packageserver-d55dfcdfc-j7mqc\" (UID: \"a84795d5-4c6b-49d8-b2a8-fe8a086bcf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.402331 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpqhx\" (UniqueName: \"kubernetes.io/projected/e08207c7-a494-42f9-b59a-946ca0134a24-kube-api-access-qpqhx\") pod \"dns-operator-744455d44c-n9b9t\" (UID: \"e08207c7-a494-42f9-b59a-946ca0134a24\") " pod="openshift-dns-operator/dns-operator-744455d44c-n9b9t" Nov 28 11:08:56 crc kubenswrapper[4772]: W1128 11:08:56.402830 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98dacb5b_7077_4230_a992_5370a0f0f44e.slice/crio-9fb5575eb23895fb19f0864f55ef3efd94a3e143775d0078676ed87cb1402278 WatchSource:0}: Error finding container 9fb5575eb23895fb19f0864f55ef3efd94a3e143775d0078676ed87cb1402278: Status 404 returned error can't find the container with id 9fb5575eb23895fb19f0864f55ef3efd94a3e143775d0078676ed87cb1402278 Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.403728 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:56 crc kubenswrapper[4772]: W1128 11:08:56.406163 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode836396e_bc34_4178_aa3e_94ce5799b2fa.slice/crio-8512d1fb632895640763e3422cb2325cfdaff1cdb34407828c8d5068eaccce6d WatchSource:0}: Error finding container 8512d1fb632895640763e3422cb2325cfdaff1cdb34407828c8d5068eaccce6d: Status 404 returned error can't find the container with id 8512d1fb632895640763e3422cb2325cfdaff1cdb34407828c8d5068eaccce6d Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.418590 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twt8h\" (UniqueName: \"kubernetes.io/projected/502e7773-b2fc-46ea-97be-bd253f318bcd-kube-api-access-twt8h\") pod \"ingress-canary-tbqqf\" (UID: \"502e7773-b2fc-46ea-97be-bd253f318bcd\") " pod="openshift-ingress-canary/ingress-canary-tbqqf" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.431898 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shp67\" (UniqueName: \"kubernetes.io/projected/24396734-e237-4fb3-9cae-8c08db3a9122-kube-api-access-shp67\") pod \"collect-profiles-29405460-hh2nf\" (UID: \"24396734-e237-4fb3-9cae-8c08db3a9122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.453475 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z7ck\" (UniqueName: \"kubernetes.io/projected/c687f311-bc59-4115-8bd9-46f90c85136f-kube-api-access-5z7ck\") pod \"machine-config-server-bqx72\" (UID: \"c687f311-bc59-4115-8bd9-46f90c85136f\") " pod="openshift-machine-config-operator/machine-config-server-bqx72" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.461472 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dg24" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.468013 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.474317 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wmgp\" (UniqueName: \"kubernetes.io/projected/a5b41370-17d7-4d6d-b93b-7d10b6403e53-kube-api-access-6wmgp\") pod \"csi-hostpathplugin-sps4h\" (UID: \"a5b41370-17d7-4d6d-b93b-7d10b6403e53\") " pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.482878 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:56 crc kubenswrapper[4772]: E1128 11:08:56.483283 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:56.983257976 +0000 UTC m=+135.306501203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.489812 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbcxr\" (UniqueName: \"kubernetes.io/projected/1324dfe7-848a-4347-b1ce-6e80f0c20d0c-kube-api-access-dbcxr\") pod \"service-ca-operator-777779d784-wh4vl\" (UID: \"1324dfe7-848a-4347-b1ce-6e80f0c20d0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.498980 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:08:56 crc kubenswrapper[4772]: W1128 11:08:56.513181 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88914887_f24a_4852_9a3e_603b1db2b5b5.slice/crio-65b5f3d41adbb0ddbaf71cc56c8bb0faf023de005bbfe7a2e5e7b81b4c3ee74b WatchSource:0}: Error finding container 65b5f3d41adbb0ddbaf71cc56c8bb0faf023de005bbfe7a2e5e7b81b4c3ee74b: Status 404 returned error can't find the container with id 65b5f3d41adbb0ddbaf71cc56c8bb0faf023de005bbfe7a2e5e7b81b4c3ee74b Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.513594 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zn47\" (UniqueName: \"kubernetes.io/projected/5514e80d-3def-4db9-90cc-67918bfa211a-kube-api-access-6zn47\") pod \"dns-default-prq4h\" (UID: \"5514e80d-3def-4db9-90cc-67918bfa211a\") " pod="openshift-dns/dns-default-prq4h" Nov 28 11:08:56 crc kubenswrapper[4772]: W1128 11:08:56.519699 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3d3606e_c8a1_4c96_b66c_14ec79268ef1.slice/crio-f78f5307a463a00e56703eab8ea922d1424cc1c289e07da29a159301db7a3795 WatchSource:0}: Error finding container f78f5307a463a00e56703eab8ea922d1424cc1c289e07da29a159301db7a3795: Status 404 returned error can't find the container with id f78f5307a463a00e56703eab8ea922d1424cc1c289e07da29a159301db7a3795 Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.525706 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.531936 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.548566 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.548996 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n9b9t" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.556141 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tfv7p" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.563956 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.570108 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.577262 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.578926 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.584260 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-prq4h" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.590266 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:56 crc kubenswrapper[4772]: E1128 11:08:56.590730 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:57.090683518 +0000 UTC m=+135.413926745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.592488 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bqx72" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.601102 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tbqqf" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.625605 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sps4h" Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.693890 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:56 crc kubenswrapper[4772]: E1128 11:08:56.694187 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:57.194167215 +0000 UTC m=+135.517410442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.715072 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" event={"ID":"a5abbb76-72b7-477e-8c59-d74fc3333188","Type":"ContainerStarted","Data":"68cd02ce62f00f8df7c6317208b70f5f219720ccf4ac1347712dcfd98dc7f545"} Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.724210 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" event={"ID":"e836396e-bc34-4178-aa3e-94ce5799b2fa","Type":"ContainerStarted","Data":"8512d1fb632895640763e3422cb2325cfdaff1cdb34407828c8d5068eaccce6d"} Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.725752 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg" event={"ID":"f6e42674-84df-48c0-a77f-35afeb169848","Type":"ContainerStarted","Data":"8da10f54b6b32f7cbc851a2fd5da5b41108949ec538175a341aad3c807a27926"} Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.726910 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26" event={"ID":"b3d3606e-c8a1-4c96-b66c-14ec79268ef1","Type":"ContainerStarted","Data":"f78f5307a463a00e56703eab8ea922d1424cc1c289e07da29a159301db7a3795"} Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.728046 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb" event={"ID":"88914887-f24a-4852-9a3e-603b1db2b5b5","Type":"ContainerStarted","Data":"65b5f3d41adbb0ddbaf71cc56c8bb0faf023de005bbfe7a2e5e7b81b4c3ee74b"} Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.732614 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pjlzq" event={"ID":"9f10a259-c46a-4325-8b94-133ebbc6041a","Type":"ContainerStarted","Data":"eca3cb7a92cb1138f3d1b9d71e27d132bacd95a008890ee48ae43c1ed0c69e7e"} Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.732646 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-pjlzq" event={"ID":"9f10a259-c46a-4325-8b94-133ebbc6041a","Type":"ContainerStarted","Data":"b576f0550de6ddd4f76f31fd26a6c54b4de35aa9f71b8927d0fa4b1562db453c"} Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.738395 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" event={"ID":"7a4baf8b-1e57-43cb-972a-b3ad73d3a192","Type":"ContainerStarted","Data":"a993de71da64b0639dba18cf753674c0e25cc1432990e628d0c4009f3ba062c2"} Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.745502 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.757463 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm" event={"ID":"8366b951-5122-40d9-b665-a3629d747906","Type":"ContainerStarted","Data":"533df59606b8e8e8d306203b2375f8691601895212ceb78d71ea986dc4a410a2"} Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.762309 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44" event={"ID":"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3","Type":"ContainerStarted","Data":"cf234bb2af9fd6ac2826f78862fcdab3b669114c8f66e0f88ab878289edbc9ff"} Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.764007 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.769128 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm" event={"ID":"29522aa2-ad8b-4fbe-b872-3abe137e7676","Type":"ContainerStarted","Data":"22f9d0a317d7eae622bc72871736e1d011fda8ea818eb410e1bec67b1abf74e2"} Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.772756 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" event={"ID":"a94fd25b-ca6a-4afe-922a-61ebfba248ed","Type":"ContainerStarted","Data":"77ced7fab3d5a538d86efbdc09b3e1ccfb3dca350d8f05396687b94a34261f0d"} Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.779828 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h" event={"ID":"98dacb5b-7077-4230-a992-5370a0f0f44e","Type":"ContainerStarted","Data":"9fb5575eb23895fb19f0864f55ef3efd94a3e143775d0078676ed87cb1402278"} Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.787536 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" event={"ID":"93e2a709-a456-4a26-a483-3f1ece08f4fe","Type":"ContainerStarted","Data":"0f841cfbec7f5885ac07c074ce7dcfd61008dd54f193752d27ad72406413913d"} Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.795296 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:56 crc kubenswrapper[4772]: E1128 11:08:56.795638 4772 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:57.295623767 +0000 UTC m=+135.618866994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.815294 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8l8k"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.817509 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.895865 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:56 crc kubenswrapper[4772]: E1128 11:08:56.896576 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:57.396557283 +0000 UTC m=+135.719800510 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.940335 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.964299 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm"] Nov 28 11:08:56 crc kubenswrapper[4772]: I1128 11:08:56.998151 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:56 crc kubenswrapper[4772]: E1128 11:08:56.998762 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-28 11:08:57.498748299 +0000 UTC m=+135.821991526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:57 crc kubenswrapper[4772]: W1128 11:08:57.032141 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3999e53_dcc7_4e16_8463_de30fb7fcbf6.slice/crio-d78ff71aaeb24ec84f9deaa02b222117211589ebc3b8511bf0acf8bbce782fa1 WatchSource:0}: Error finding container d78ff71aaeb24ec84f9deaa02b222117211589ebc3b8511bf0acf8bbce782fa1: Status 404 returned error can't find the container with id d78ff71aaeb24ec84f9deaa02b222117211589ebc3b8511bf0acf8bbce782fa1 Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.100673 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:57 crc kubenswrapper[4772]: E1128 11:08:57.101054 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:57.601031688 +0000 UTC m=+135.924274915 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.123728 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wwxmx"] Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.124330 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bftgd"] Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.202628 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2dg24"] Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.204751 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:57 crc kubenswrapper[4772]: E1128 11:08:57.205133 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:57.705119494 +0000 UTC m=+136.028362721 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.218814 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k"] Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.268881 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xq8v7"] Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.274753 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj"] Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.305636 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:57 crc kubenswrapper[4772]: E1128 11:08:57.305965 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 11:08:57.805951586 +0000 UTC m=+136.129194813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.420078 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:57 crc kubenswrapper[4772]: E1128 11:08:57.420378 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:57.920351209 +0000 UTC m=+136.243594436 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.510922 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tfv7p"] Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.522542 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:57 crc kubenswrapper[4772]: E1128 11:08:57.523391 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:58.023271788 +0000 UTC m=+136.346515015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.624415 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:57 crc kubenswrapper[4772]: E1128 11:08:57.624725 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:58.12471259 +0000 UTC m=+136.447955817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.653603 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-pjlzq" Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.703789 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:08:57 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 28 11:08:57 crc kubenswrapper[4772]: [+]process-running ok Nov 28 11:08:57 crc kubenswrapper[4772]: healthz check failed Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.703838 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.726807 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:57 crc kubenswrapper[4772]: E1128 11:08:57.727144 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:58.227127363 +0000 UTC m=+136.550370600 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.802286 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-pjlzq" podStartSLOduration=117.802264912 podStartE2EDuration="1m57.802264912s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:57.800801825 +0000 UTC m=+136.124045062" watchObservedRunningTime="2025-11-28 11:08:57.802264912 +0000 UTC m=+136.125508139" Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.827690 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.827915 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xffg6"] Nov 28 11:08:57 crc kubenswrapper[4772]: E1128 11:08:57.828299 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:58.328284476 +0000 UTC m=+136.651527703 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.851895 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tfv7p" event={"ID":"bdb050b4-13af-40be-903a-2c5b1e233d7b","Type":"ContainerStarted","Data":"7351ad879d828f92c8e0f23c729636397c860d0ba1ca3fcb616a4659b2c91ac1"} Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.854560 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7" event={"ID":"d827072f-b633-42a9-b6c0-1f515508f488","Type":"ContainerStarted","Data":"797f0246a80d61bf879d9c98af308d56a6ab6316eb3084462245d045c75cc858"} Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.858518 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26" event={"ID":"b3d3606e-c8a1-4c96-b66c-14ec79268ef1","Type":"ContainerStarted","Data":"03cd417f57735fb512c05608f2e0e53d0775aedee893f24c7446ed397df92220"} Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.882015 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm" event={"ID":"29522aa2-ad8b-4fbe-b872-3abe137e7676","Type":"ContainerStarted","Data":"c9ff2940cbb6555e7ebe39df9a464a84e6cc91c05e3623556dfc58369b4f9181"} Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.882594 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm" Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.883726 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx" event={"ID":"6310d4f4-cea5-4b44-a388-193d13bc5ec7","Type":"ContainerStarted","Data":"15c5a65e3c450cca75729f97d8c5f44038295b711b7e6663f13292e2da471775"} Nov 28 11:08:57 crc kubenswrapper[4772]: W1128 11:08:57.884970 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c2f8de4_e5c0_493a_b16f_b415832ba9bd.slice/crio-abda2fc9fc65e4e7ae6b733d7b84229c89711a50c49d2616fc424bd45e149ef0 WatchSource:0}: Error finding container abda2fc9fc65e4e7ae6b733d7b84229c89711a50c49d2616fc424bd45e149ef0: Status 404 returned error can't find the container with id abda2fc9fc65e4e7ae6b733d7b84229c89711a50c49d2616fc424bd45e149ef0 Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.892801 4772 generic.go:334] "Generic (PLEG): container finished" podID="98dacb5b-7077-4230-a992-5370a0f0f44e" containerID="3113a01888dedf2840ca2d5c86924ba019da014cdeaeffc6bef2fca762bd2362" exitCode=0 Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.895243 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h" 
event={"ID":"98dacb5b-7077-4230-a992-5370a0f0f44e","Type":"ContainerDied","Data":"3113a01888dedf2840ca2d5c86924ba019da014cdeaeffc6bef2fca762bd2362"} Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.895308 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm" Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.903111 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" event={"ID":"e836396e-bc34-4178-aa3e-94ce5799b2fa","Type":"ContainerStarted","Data":"8aee7676751ff57fb616ad64cd13103c10a00a3bdc529f12c58d5ea525729874"} Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.903768 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.909852 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44" event={"ID":"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3","Type":"ContainerStarted","Data":"f46315c147df88159e516b6d48633aadbf34dbd89dea7a822b9cfb38702e4e5f"} Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.910566 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm" event={"ID":"d3999e53-dcc7-4e16-8463-de30fb7fcbf6","Type":"ContainerStarted","Data":"d78ff71aaeb24ec84f9deaa02b222117211589ebc3b8511bf0acf8bbce782fa1"} Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.918037 4772 generic.go:334] "Generic (PLEG): container finished" podID="93e2a709-a456-4a26-a483-3f1ece08f4fe" containerID="ef1b49d578818ad64b29ea997955be4c6df7674cfce86ca3324f4263885fa3dc" exitCode=0 Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.918094 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" event={"ID":"93e2a709-a456-4a26-a483-3f1ece08f4fe","Type":"ContainerDied","Data":"ef1b49d578818ad64b29ea997955be4c6df7674cfce86ca3324f4263885fa3dc"} Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.921731 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.930625 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:57 crc kubenswrapper[4772]: E1128 11:08:57.932098 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:58.432069172 +0000 UTC m=+136.755312399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.935151 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" event={"ID":"f18077e0-f3d3-4327-a384-321e3d9b9f78","Type":"ContainerStarted","Data":"74a30d04ea8480426fb706f58c74d46ba60f81820d0da61874c6fa3fe9890944"} Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.939943 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:57 crc kubenswrapper[4772]: E1128 11:08:57.942324 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:58.442308476 +0000 UTC m=+136.765551703 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.948084 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg" event={"ID":"f6e42674-84df-48c0-a77f-35afeb169848","Type":"ContainerStarted","Data":"fe8bb63fe46834f9cbb71cde76c1185a044c8441c02ab8769ef2a817ebfa8a75"} Nov 28 11:08:57 crc kubenswrapper[4772]: I1128 11:08:57.989684 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bqx72" event={"ID":"c687f311-bc59-4115-8bd9-46f90c85136f","Type":"ContainerStarted","Data":"6fded51d517901a2228ba5fa24fc86e466f1da929d1ab9374d9c5593dc41e069"} Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.031098 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" podStartSLOduration=118.031083367 podStartE2EDuration="1m58.031083367s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:58.028771894 +0000 UTC m=+136.352015121" watchObservedRunningTime="2025-11-28 11:08:58.031083367 +0000 UTC m=+136.354326594" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.044479 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9" event={"ID":"facbc367-fb0c-466a-a01e-8f1118db6fcb","Type":"ContainerStarted","Data":"3580640e986e18ecdc494017b23a70cf64e2611eab9c2ddf8b263eb1f36b6dbd"} Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.044519 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wwxmx" event={"ID":"9078e378-b549-4e5e-82ee-b9a96cf8e4da","Type":"ContainerStarted","Data":"997d12eaf16cffbf531f5a5270e3c5309981cd55491370252e42525ea6f36813"} Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.044529 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bftgd" event={"ID":"a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f","Type":"ContainerStarted","Data":"f13d760c4c0d883463367f3b1d9987a6d4fb3739a84d9f7eae528e3c5ee4aa86"} Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.044539 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm" event={"ID":"8366b951-5122-40d9-b665-a3629d747906","Type":"ContainerStarted","Data":"459bc3e5429a0d093331c4e57f1b02c16c8472cda416d5bdabf10fcdae196250"} Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.045127 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:58 crc kubenswrapper[4772]: E1128 11:08:58.045679 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:58.545664239 +0000 UTC m=+136.868907466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.053581 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" event={"ID":"a94fd25b-ca6a-4afe-922a-61ebfba248ed","Type":"ContainerStarted","Data":"ce50e0e6a6ae7d13b905c7874b0b6765f8231a81c801c19576eb90edbf5f3eff"} Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.054253 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.058242 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh" event={"ID":"f1ac9e8a-e68c-4d22-8841-3f36863b0574","Type":"ContainerStarted","Data":"f75c97e7a657eb0514ecee0389e3e2bc615c8a11488570c9bd87767d6fa025be"} Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.066288 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.067276 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" event={"ID":"a44f7c6d-d509-4a18-9fd9-47419d54af4b","Type":"ContainerStarted","Data":"aeb15f7e8510fa60c4d8b8d558ec4d92d0beca062923adae0c6fceae628c935b"} Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.069995 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" event={"ID":"99dd4351-6bbc-4e6e-b08f-88de7315987b","Type":"ContainerStarted","Data":"8020b85b08594ed557fc4201e689fb5afaf5e7ab8a033eb89d21944a49e4c4bd"} Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.071056 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dg24" event={"ID":"048a22d4-da9f-4fc6-837a-c5398965a0f0","Type":"ContainerStarted","Data":"d1fa8b428303a2e5481ebe155cbcd23a93193fc6ff8275bdceb5712728164b3f"} Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.095725 4772 generic.go:334] "Generic (PLEG): container finished" podID="7a4baf8b-1e57-43cb-972a-b3ad73d3a192" containerID="0c71e2f65bc757d109f3b99015f64733e59a3a544ac628791c8420587a2f8de0" exitCode=0 Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.096599 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" event={"ID":"7a4baf8b-1e57-43cb-972a-b3ad73d3a192","Type":"ContainerDied","Data":"0c71e2f65bc757d109f3b99015f64733e59a3a544ac628791c8420587a2f8de0"} Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.130352 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8l8k" event={"ID":"8b29fa00-3205-4f7c-8f5f-671c7921029b","Type":"ContainerStarted","Data":"29505f525fafae35bf42ba7284460ff89bf6f95cb1b1c86999711f1ae58275b8"} Nov 
28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.141890 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tbqqf"] Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.148623 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:58 crc kubenswrapper[4772]: E1128 11:08:58.149450 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:58.649429614 +0000 UTC m=+136.972672841 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.168697 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xq8v7" event={"ID":"8e4ff33b-3159-4449-87fa-36031498cfbc","Type":"ContainerStarted","Data":"e0ef04917825ba6a3d9670a41d2d205d0ca3f1d76fc9e3f1c349d9cdc88da8ba"} Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.180629 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n9b9t"] Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.190639 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" event={"ID":"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d","Type":"ContainerStarted","Data":"2400ee382ea1c250e4545d7affbe1e0e3e77b5bed2594e764759383f0e50c2f0"} Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.194981 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl"] Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.218751 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztvsg" podStartSLOduration=118.218727979 podStartE2EDuration="1m58.218727979s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:58.13128984 +0000 UTC m=+136.454533067" watchObservedRunningTime="2025-11-28 11:08:58.218727979 +0000 UTC m=+136.541971206" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.224865 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-prq4h"] Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.227571 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vsrmr"] Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.255694 4772 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:58 crc kubenswrapper[4772]: E1128 11:08:58.256517 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:58.756494605 +0000 UTC m=+137.079737842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.257900 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c5t26" podStartSLOduration=117.257884719 podStartE2EDuration="1m57.257884719s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:58.243122231 +0000 UTC m=+136.566365478" watchObservedRunningTime="2025-11-28 11:08:58.257884719 +0000 UTC m=+136.581127946" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.261264 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sps4h"] Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.264139 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wt2vm" podStartSLOduration=117.264119106 podStartE2EDuration="1m57.264119106s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:58.260017106 +0000 UTC m=+136.583260333" watchObservedRunningTime="2025-11-28 11:08:58.264119106 +0000 UTC m=+136.587362333" Nov 28 11:08:58 crc kubenswrapper[4772]: W1128 11:08:58.268521 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5514e80d_3def_4db9_90cc_67918bfa211a.slice/crio-cf9f24143edd8f4ae886a98f4d0fda7556a695497e3e1a33938f036963cacdb4 WatchSource:0}: Error finding container cf9f24143edd8f4ae886a98f4d0fda7556a695497e3e1a33938f036963cacdb4: Status 404 returned error can't find the container with id cf9f24143edd8f4ae886a98f4d0fda7556a695497e3e1a33938f036963cacdb4 Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.268555 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk"] Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.305933 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" podStartSLOduration=117.305918619 
podStartE2EDuration="1m57.305918619s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:58.305801116 +0000 UTC m=+136.629044353" watchObservedRunningTime="2025-11-28 11:08:58.305918619 +0000 UTC m=+136.629161846" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.350501 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-xntkm" podStartSLOduration=118.348502868 podStartE2EDuration="1m58.348502868s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:58.348329702 +0000 UTC m=+136.671572939" watchObservedRunningTime="2025-11-28 11:08:58.348502868 +0000 UTC m=+136.671746095" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.357822 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:58 crc kubenswrapper[4772]: E1128 11:08:58.358759 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:58.858746572 +0000 UTC m=+137.181989799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.470817 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:58 crc kubenswrapper[4772]: E1128 11:08:58.471045 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:58.971015887 +0000 UTC m=+137.294259114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.471296 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:58 crc kubenswrapper[4772]: E1128 11:08:58.471708 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:58.971698229 +0000 UTC m=+137.294941466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.572171 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:58 crc kubenswrapper[4772]: E1128 11:08:58.572301 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:59.072286934 +0000 UTC m=+137.395530161 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.572593 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:58 crc kubenswrapper[4772]: E1128 11:08:58.572880 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:59.072872852 +0000 UTC m=+137.396116079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:58 crc kubenswrapper[4772]: W1128 11:08:58.635842 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8903d4eb_b874_4abf_86b6_e44e7bd951b0.slice/crio-7310c87bacfbdbe014e7d3edc7189147cee3438db9cfb26e8d9902021c7d096a WatchSource:0}: Error finding container 7310c87bacfbdbe014e7d3edc7189147cee3438db9cfb26e8d9902021c7d096a: Status 404 returned error can't find the container with id 7310c87bacfbdbe014e7d3edc7189147cee3438db9cfb26e8d9902021c7d096a Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.660607 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf"] Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.661789 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n"] Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.667875 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:08:58 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 28 11:08:58 crc kubenswrapper[4772]: [+]process-running ok Nov 28 11:08:58 crc kubenswrapper[4772]: healthz check failed Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.667926 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.673908 4772 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:58 crc kubenswrapper[4772]: E1128 11:08:58.674292 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:59.174275033 +0000 UTC m=+137.497518260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.674845 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc"] Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.679448 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rnhdf"] Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.690882 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g6gqv"] Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.691802 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g6gqv" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.694149 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.705899 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6gqv"] Nov 28 11:08:58 crc kubenswrapper[4772]: W1128 11:08:58.727195 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24396734_e237_4fb3_9cae_8c08db3a9122.slice/crio-44eff93824f19e901dd8c23110fc8900373cc83a04c249c949a3b963839b5b77 WatchSource:0}: Error finding container 44eff93824f19e901dd8c23110fc8900373cc83a04c249c949a3b963839b5b77: Status 404 returned error can't find the container with id 44eff93824f19e901dd8c23110fc8900373cc83a04c249c949a3b963839b5b77 Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.777644 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.777707 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz6lt\" (UniqueName: \"kubernetes.io/projected/7b72c082-4b2a-4357-b95c-51d456028f86-kube-api-access-lz6lt\") pod \"certified-operators-g6gqv\" (UID: \"7b72c082-4b2a-4357-b95c-51d456028f86\") " pod="openshift-marketplace/certified-operators-g6gqv" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.777754 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b72c082-4b2a-4357-b95c-51d456028f86-catalog-content\") pod \"certified-operators-g6gqv\" (UID: \"7b72c082-4b2a-4357-b95c-51d456028f86\") " pod="openshift-marketplace/certified-operators-g6gqv" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.777779 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b72c082-4b2a-4357-b95c-51d456028f86-utilities\") pod \"certified-operators-g6gqv\" (UID: \"7b72c082-4b2a-4357-b95c-51d456028f86\") " pod="openshift-marketplace/certified-operators-g6gqv" Nov 28 11:08:58 crc kubenswrapper[4772]: E1128 11:08:58.778189 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:59.278175123 +0000 UTC m=+137.601418350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.888001 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:58 crc kubenswrapper[4772]: E1128 11:08:58.888275 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:59.388249698 +0000 UTC m=+137.711492915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.888526 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.888562 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz6lt\" (UniqueName: \"kubernetes.io/projected/7b72c082-4b2a-4357-b95c-51d456028f86-kube-api-access-lz6lt\") pod \"certified-operators-g6gqv\" (UID: \"7b72c082-4b2a-4357-b95c-51d456028f86\") " pod="openshift-marketplace/certified-operators-g6gqv" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.888595 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b72c082-4b2a-4357-b95c-51d456028f86-catalog-content\") pod \"certified-operators-g6gqv\" (UID: \"7b72c082-4b2a-4357-b95c-51d456028f86\") " pod="openshift-marketplace/certified-operators-g6gqv" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.888619 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b72c082-4b2a-4357-b95c-51d456028f86-utilities\") pod \"certified-operators-g6gqv\" (UID: \"7b72c082-4b2a-4357-b95c-51d456028f86\") " pod="openshift-marketplace/certified-operators-g6gqv" Nov 28 11:08:58 crc kubenswrapper[4772]: E1128 11:08:58.888902 4772 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:59.388884668 +0000 UTC m=+137.712127895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.889006 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b72c082-4b2a-4357-b95c-51d456028f86-utilities\") pod \"certified-operators-g6gqv\" (UID: \"7b72c082-4b2a-4357-b95c-51d456028f86\") " pod="openshift-marketplace/certified-operators-g6gqv" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.889217 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b72c082-4b2a-4357-b95c-51d456028f86-catalog-content\") pod \"certified-operators-g6gqv\" (UID: \"7b72c082-4b2a-4357-b95c-51d456028f86\") " pod="openshift-marketplace/certified-operators-g6gqv" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.890380 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dkfw4"] Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.891300 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dkfw4" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.896349 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.912317 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dkfw4"] Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.949914 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz6lt\" (UniqueName: \"kubernetes.io/projected/7b72c082-4b2a-4357-b95c-51d456028f86-kube-api-access-lz6lt\") pod \"certified-operators-g6gqv\" (UID: \"7b72c082-4b2a-4357-b95c-51d456028f86\") " pod="openshift-marketplace/certified-operators-g6gqv" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.990539 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.990937 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-utilities\") pod \"community-operators-dkfw4\" (UID: \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\") " pod="openshift-marketplace/community-operators-dkfw4" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.990977 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-catalog-content\") pod \"community-operators-dkfw4\" (UID: \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\") " pod="openshift-marketplace/community-operators-dkfw4" Nov 28 11:08:58 crc kubenswrapper[4772]: I1128 11:08:58.991061 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxbfz\" (UniqueName: \"kubernetes.io/projected/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-kube-api-access-fxbfz\") pod \"community-operators-dkfw4\" (UID: \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\") " pod="openshift-marketplace/community-operators-dkfw4" Nov 28 11:08:58 crc kubenswrapper[4772]: E1128 11:08:58.991236 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:59.491211739 +0000 UTC m=+137.814454966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.109493 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxbfz\" (UniqueName: \"kubernetes.io/projected/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-kube-api-access-fxbfz\") pod \"community-operators-dkfw4\" (UID: \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\") " pod="openshift-marketplace/community-operators-dkfw4" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.111227 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-utilities\") pod \"community-operators-dkfw4\" (UID: \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\") " pod="openshift-marketplace/community-operators-dkfw4" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.111294 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-catalog-content\") pod \"community-operators-dkfw4\" (UID: \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\") " pod="openshift-marketplace/community-operators-dkfw4" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.111505 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:59 crc kubenswrapper[4772]: E1128 11:08:59.112143 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:59.612126407 +0000 UTC m=+137.935369634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.112714 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-utilities\") pod \"community-operators-dkfw4\" (UID: \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\") " pod="openshift-marketplace/community-operators-dkfw4" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.113091 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-catalog-content\") pod \"community-operators-dkfw4\" (UID: \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\") " pod="openshift-marketplace/community-operators-dkfw4" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.146166 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d6mbw"] Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.150817 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.151175 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6gqv" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.174945 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d6mbw"] Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.192464 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxbfz\" (UniqueName: \"kubernetes.io/projected/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-kube-api-access-fxbfz\") pod \"community-operators-dkfw4\" (UID: \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\") " pod="openshift-marketplace/community-operators-dkfw4" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.214033 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.214154 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7p7h\" (UniqueName: \"kubernetes.io/projected/fa914cfd-35f4-469e-a762-bc1dccea9f23-kube-api-access-g7p7h\") pod \"certified-operators-d6mbw\" (UID: \"fa914cfd-35f4-469e-a762-bc1dccea9f23\") " pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.214239 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa914cfd-35f4-469e-a762-bc1dccea9f23-utilities\") pod \"certified-operators-d6mbw\" (UID: \"fa914cfd-35f4-469e-a762-bc1dccea9f23\") " 
pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.214282 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa914cfd-35f4-469e-a762-bc1dccea9f23-catalog-content\") pod \"certified-operators-d6mbw\" (UID: \"fa914cfd-35f4-469e-a762-bc1dccea9f23\") " pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:08:59 crc kubenswrapper[4772]: E1128 11:08:59.214392 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:59.714377725 +0000 UTC m=+138.037620942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.227661 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dkfw4" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.239967 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44" event={"ID":"d593cf3a-ced7-4f3a-a15a-10c3309a2ee3","Type":"ContainerStarted","Data":"14ff44f088f16ef9a51524bd3718f95ec2ec9196fe7cf04adf556fa58ca90fa4"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.285826 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9" event={"ID":"facbc367-fb0c-466a-a01e-8f1118db6fcb","Type":"ContainerStarted","Data":"5d4d941c62cdd9fe489f8bc2a51a37e5ada9ccb32cf8eaa632deb74d91ced500"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.308688 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4ct44" podStartSLOduration=118.30867265 podStartE2EDuration="1m58.30867265s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:59.282863093 +0000 UTC m=+137.606106320" watchObservedRunningTime="2025-11-28 11:08:59.30867265 +0000 UTC m=+137.631915877" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.310000 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4b2ps"] Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.311013 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.316069 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7p7h\" (UniqueName: \"kubernetes.io/projected/fa914cfd-35f4-469e-a762-bc1dccea9f23-kube-api-access-g7p7h\") pod \"certified-operators-d6mbw\" (UID: \"fa914cfd-35f4-469e-a762-bc1dccea9f23\") " pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.316175 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.316247 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa914cfd-35f4-469e-a762-bc1dccea9f23-utilities\") pod \"certified-operators-d6mbw\" (UID: \"fa914cfd-35f4-469e-a762-bc1dccea9f23\") " pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.316322 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa914cfd-35f4-469e-a762-bc1dccea9f23-catalog-content\") pod \"certified-operators-d6mbw\" (UID: \"fa914cfd-35f4-469e-a762-bc1dccea9f23\") " pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.316798 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa914cfd-35f4-469e-a762-bc1dccea9f23-catalog-content\") pod \"certified-operators-d6mbw\" (UID: \"fa914cfd-35f4-469e-a762-bc1dccea9f23\") " pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.317070 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa914cfd-35f4-469e-a762-bc1dccea9f23-utilities\") pod \"certified-operators-d6mbw\" (UID: \"fa914cfd-35f4-469e-a762-bc1dccea9f23\") " pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:08:59 crc kubenswrapper[4772]: E1128 11:08:59.317328 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:08:59.817316054 +0000 UTC m=+138.140559281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.333476 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4b2ps"] Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.334728 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n6lq9" podStartSLOduration=118.334710595 podStartE2EDuration="1m58.334710595s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:59.330527532 +0000 UTC m=+137.653770759" watchObservedRunningTime="2025-11-28 11:08:59.334710595 +0000 UTC m=+137.657953822" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.339702 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb" event={"ID":"88914887-f24a-4852-9a3e-603b1db2b5b5","Type":"ContainerStarted","Data":"91d0127265a32ec96d23b369c3ab9f9c54e4705dafd67af273d25d74812608e3"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.391721 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7p7h\" (UniqueName: \"kubernetes.io/projected/fa914cfd-35f4-469e-a762-bc1dccea9f23-kube-api-access-g7p7h\") pod \"certified-operators-d6mbw\" (UID: \"fa914cfd-35f4-469e-a762-bc1dccea9f23\") " pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.425632 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk" event={"ID":"8903d4eb-b874-4abf-86b6-e44e7bd951b0","Type":"ContainerStarted","Data":"7310c87bacfbdbe014e7d3edc7189147cee3438db9cfb26e8d9902021c7d096a"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.426774 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.426914 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hbvb\" (UniqueName: \"kubernetes.io/projected/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-kube-api-access-5hbvb\") pod \"community-operators-4b2ps\" (UID: \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\") " pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.426975 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-catalog-content\") pod \"community-operators-4b2ps\" (UID: \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\") " 
pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.427053 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-utilities\") pod \"community-operators-4b2ps\" (UID: \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\") " pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:08:59 crc kubenswrapper[4772]: E1128 11:08:59.427849 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:08:59.927827953 +0000 UTC m=+138.251071180 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.435476 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b5gcb" podStartSLOduration=118.435459985 podStartE2EDuration="1m58.435459985s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:59.433830433 +0000 UTC m=+137.757073660" watchObservedRunningTime="2025-11-28 11:08:59.435459985 +0000 UTC m=+137.758703212" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.489675 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" event={"ID":"f18077e0-f3d3-4327-a384-321e3d9b9f78","Type":"ContainerStarted","Data":"bc1d2775946eadc48f5aa71ba8956c8b4ee8cf306cea8b6efbce9ab77bf80912"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.497381 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n9b9t" event={"ID":"e08207c7-a494-42f9-b59a-946ca0134a24","Type":"ContainerStarted","Data":"e2dfb6ff7c2095815e73819c093d7d4759474e46223e4adce97358aa2a1fd655"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.500469 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xffg6" event={"ID":"7c2f8de4-e5c0-493a-b16f-b415832ba9bd","Type":"ContainerStarted","Data":"6868b2c91a5af5f3b4d99a1a44aa656d584859a2dcc5c8265444ee4bee3510e5"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.500511 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xffg6" event={"ID":"7c2f8de4-e5c0-493a-b16f-b415832ba9bd","Type":"ContainerStarted","Data":"abda2fc9fc65e4e7ae6b733d7b84229c89711a50c49d2616fc424bd45e149ef0"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.515662 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bftgd" 
event={"ID":"a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f","Type":"ContainerStarted","Data":"849d96e2900d5f15ffce84945272805f55f39aa14f7c4c0b491f55d50b444904"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.529185 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.529221 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-utilities\") pod \"community-operators-4b2ps\" (UID: \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\") " pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.529291 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hbvb\" (UniqueName: \"kubernetes.io/projected/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-kube-api-access-5hbvb\") pod \"community-operators-4b2ps\" (UID: \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\") " pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.529327 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-catalog-content\") pod \"community-operators-4b2ps\" (UID: \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\") " pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:08:59 crc kubenswrapper[4772]: E1128 11:08:59.529652 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:00.029641107 +0000 UTC m=+138.352884334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.530734 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-utilities\") pod \"community-operators-4b2ps\" (UID: \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\") " pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.530900 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-catalog-content\") pod \"community-operators-4b2ps\" (UID: \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\") " pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.543951 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.588664 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" event={"ID":"a5abbb76-72b7-477e-8c59-d74fc3333188","Type":"ContainerStarted","Data":"6c8450db8a124d4f2cc97fc9aaa984724b7d0221581e6966e31e14320c285a06"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.589231 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hbvb\" (UniqueName: \"kubernetes.io/projected/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-kube-api-access-5hbvb\") pod \"community-operators-4b2ps\" (UID: \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\") " pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.609505 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7" event={"ID":"d827072f-b633-42a9-b6c0-1f515508f488","Type":"ContainerStarted","Data":"31e591dc0137ac2c9671b268eaef9e72cdd1b1118b7e8a3161e35e060b6fbd9d"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.632928 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:59 crc kubenswrapper[4772]: E1128 11:08:59.634266 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:00.134235619 +0000 UTC m=+138.457478846 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.648674 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xq8v7" event={"ID":"8e4ff33b-3159-4449-87fa-36031498cfbc","Type":"ContainerStarted","Data":"2f1d96eb83ceb37a76b8a8025c7b66b5cbdec8583e17729844a04b0883400b71"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.649965 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.663207 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-p6f5q" podStartSLOduration=119.663187415 podStartE2EDuration="1m59.663187415s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:59.650272196 +0000 UTC m=+137.973515423" watchObservedRunningTime="2025-11-28 11:08:59.663187415 +0000 UTC m=+137.986430642" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.663375 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-xffg6" podStartSLOduration=119.66335332 podStartE2EDuration="1m59.66335332s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:59.580171647 +0000 UTC m=+137.903414874" watchObservedRunningTime="2025-11-28 11:08:59.66335332 +0000 UTC m=+137.986596547" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.663558 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:08:59 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 28 11:08:59 crc kubenswrapper[4772]: [+]process-running ok Nov 28 11:08:59 crc kubenswrapper[4772]: healthz check failed Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.663832 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.664330 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.671776 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" event={"ID":"93e2a709-a456-4a26-a483-3f1ece08f4fe","Type":"ContainerStarted","Data":"26dd80d53e0cec0cbaa9ed2eed464d67a853636803484b9724314ac76a7ee3ab"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.717862 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" event={"ID":"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d","Type":"ContainerStarted","Data":"1f20392966d4cc8f41d7c2841cf7407b8179539cf781cd79c8bb08e608651bbb"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.734227 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:59 crc kubenswrapper[4772]: E1128 11:08:59.735718 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:00.235700741 +0000 UTC m=+138.558944018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.753650 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sps4h" event={"ID":"a5b41370-17d7-4d6d-b93b-7d10b6403e53","Type":"ContainerStarted","Data":"5aa4bd7661189a65a4bfc290e9efccc458c8882ee7a87c7ff068c4c3ea11810a"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.781894 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" podStartSLOduration=118.781879053 podStartE2EDuration="1m58.781879053s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:59.780944254 +0000 UTC m=+138.104187481" watchObservedRunningTime="2025-11-28 11:08:59.781879053 +0000 UTC m=+138.105122270" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.782149 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xq8v7" podStartSLOduration=119.782143401 podStartE2EDuration="1m59.782143401s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:08:59.744751967 +0000 UTC m=+138.067995194" watchObservedRunningTime="2025-11-28 11:08:59.782143401 +0000 UTC m=+138.105386628" Nov 28 11:08:59 crc 
kubenswrapper[4772]: I1128 11:08:59.831323 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" event={"ID":"b36f0f60-82ce-49f5-b23b-2f585244e8db","Type":"ContainerStarted","Data":"35fac57ca2cab0b2e166cabb5dd430649d11b9e21d69211971dfc06df5223903"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.840534 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:08:59 crc kubenswrapper[4772]: E1128 11:08:59.840860 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:00.34084536 +0000 UTC m=+138.664088587 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.851873 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8l8k" event={"ID":"8b29fa00-3205-4f7c-8f5f-671c7921029b","Type":"ContainerStarted","Data":"eadbecf41ea06b3c9ae51dc1ec3ae05740885b27828d1ef30d3688052b10bad0"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.888951 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h" event={"ID":"98dacb5b-7077-4230-a992-5370a0f0f44e","Type":"ContainerStarted","Data":"48c4cf5d8c5dee9c0488bc221a2ec12626767fd059cd0da11b7b787fbd33f781"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.889097 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.945195 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:08:59 crc kubenswrapper[4772]: E1128 11:08:59.946287 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:00.446271408 +0000 UTC m=+138.769514635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.952242 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dg24" event={"ID":"048a22d4-da9f-4fc6-837a-c5398965a0f0","Type":"ContainerStarted","Data":"a484a782276b23759baaa9d0d2514d198f74f869cb90909beaa3ce6d45aba3c4"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.995653 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wwxmx" event={"ID":"9078e378-b549-4e5e-82ee-b9a96cf8e4da","Type":"ContainerStarted","Data":"e958179c334d1f9e5cf7a4bca63b4bd288c948ed6126ec8778bf0382fa98bd96"} Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.997459 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-wwxmx" Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.997582 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-wwxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Nov 28 11:08:59 crc kubenswrapper[4772]: I1128 11:08:59.997622 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wwxmx" podUID="9078e378-b549-4e5e-82ee-b9a96cf8e4da" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.049200 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8l8k" podStartSLOduration=119.049179087 podStartE2EDuration="1m59.049179087s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:00.045908003 +0000 UTC m=+138.369151230" watchObservedRunningTime="2025-11-28 11:09:00.049179087 +0000 UTC m=+138.372422334" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.053048 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:00 crc kubenswrapper[4772]: E1128 11:09:00.053755 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:00.553739911 +0000 UTC m=+138.876983128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.075592 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" event={"ID":"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb","Type":"ContainerStarted","Data":"1255d5214f68b6c5d78b3af3c8b7d4e23cce3c692011ceea45bb7dc6bbd8e1eb"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.107763 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" event={"ID":"99dd4351-6bbc-4e6e-b08f-88de7315987b","Type":"ContainerStarted","Data":"ec0a78c45017a5770b3579dcad8d3e44310575856a25dbf07a335a0173e1703f"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.109458 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.144603 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.157449 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:00 crc kubenswrapper[4772]: E1128 11:09:00.158525 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:00.658513369 +0000 UTC m=+138.981756596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.162884 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tbqqf" event={"ID":"502e7773-b2fc-46ea-97be-bd253f318bcd","Type":"ContainerStarted","Data":"f7db3582c7b0700f6ac8c07351d9978da010848d10b4bfebbdda5a3d36b1a823"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.162948 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tbqqf" event={"ID":"502e7773-b2fc-46ea-97be-bd253f318bcd","Type":"ContainerStarted","Data":"a8d1a6310cbe3f34129f2f8f130c27b0a8c4c8ae314500650bd73dff6eb9a47b"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.230766 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-wwxmx" podStartSLOduration=120.230747526 podStartE2EDuration="2m0.230747526s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:00.23055228 +0000 UTC m=+138.553795507" watchObservedRunningTime="2025-11-28 11:09:00.230747526 +0000 UTC m=+138.553990753" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.261096 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:00 crc kubenswrapper[4772]: E1128 11:09:00.261472 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:00.761446228 +0000 UTC m=+139.084689455 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.261590 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:00 crc kubenswrapper[4772]: E1128 11:09:00.262804 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:00.762792391 +0000 UTC m=+139.086035618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.264938 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tfv7p" event={"ID":"bdb050b4-13af-40be-903a-2c5b1e233d7b","Type":"ContainerStarted","Data":"f4a74b4b7e33dbf7e76fd91ba9285b9d2fdac9e126c17f7d010f2c7f6298b521"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.282688 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" event={"ID":"a44f7c6d-d509-4a18-9fd9-47419d54af4b","Type":"ContainerStarted","Data":"2e1d39d3bd6a26222e2ce4e1c9c15b46454f96d6ab0162218df7c03e2a367002"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.292946 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h" podStartSLOduration=120.292912575 podStartE2EDuration="2m0.292912575s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:00.28803742 +0000 UTC m=+138.611280647" watchObservedRunningTime="2025-11-28 11:09:00.292912575 +0000 UTC m=+138.616155802" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.310880 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bm78k" podStartSLOduration=119.310864533 podStartE2EDuration="1m59.310864533s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:00.310595554 +0000 UTC m=+138.633838781" watchObservedRunningTime="2025-11-28 
11:09:00.310864533 +0000 UTC m=+138.634107760" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.354898 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-tfv7p" podStartSLOduration=119.354882357 podStartE2EDuration="1m59.354882357s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:00.35466379 +0000 UTC m=+138.677907017" watchObservedRunningTime="2025-11-28 11:09:00.354882357 +0000 UTC m=+138.678125584" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.362840 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:00 crc kubenswrapper[4772]: E1128 11:09:00.363184 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:00.863160599 +0000 UTC m=+139.186403826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.415770 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm" event={"ID":"d3999e53-dcc7-4e16-8463-de30fb7fcbf6","Type":"ContainerStarted","Data":"fac048861a08938342b903b51eae4ae543a50c40ca21559645181521ffddbf21"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.429801 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-tbqqf" podStartSLOduration=7.429784898 podStartE2EDuration="7.429784898s" podCreationTimestamp="2025-11-28 11:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:00.395022298 +0000 UTC m=+138.718265525" watchObservedRunningTime="2025-11-28 11:09:00.429784898 +0000 UTC m=+138.753028125" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.431605 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6stnj" podStartSLOduration=120.431598196 podStartE2EDuration="2m0.431598196s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:00.429391716 +0000 UTC m=+138.752634943" watchObservedRunningTime="2025-11-28 11:09:00.431598196 +0000 UTC m=+138.754841423" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.464112 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:00 crc kubenswrapper[4772]: E1128 11:09:00.465528 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:00.96551311 +0000 UTC m=+139.288756327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.494576 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh" event={"ID":"f1ac9e8a-e68c-4d22-8841-3f36863b0574","Type":"ContainerStarted","Data":"624caa110c9b08402a65a275e174498a0a5edbecd1656786b19c7d698a304fb4"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.495351 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.516693 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xq8v7" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.547991 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh" podStartSLOduration=119.547974561 podStartE2EDuration="1m59.547974561s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:00.538896783 +0000 UTC m=+138.862140020" watchObservedRunningTime="2025-11-28 11:09:00.547974561 +0000 UTC m=+138.871217788" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.574939 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.575163 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx" event={"ID":"6310d4f4-cea5-4b44-a388-193d13bc5ec7","Type":"ContainerStarted","Data":"2b31720462abbe9c7b075fe84353d9cdde261ed622f3521056f4081663e51fc9"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.575650 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.575917 4772 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" Nov 28 11:09:00 crc kubenswrapper[4772]: E1128 11:09:00.576071 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:01.07605784 +0000 UTC m=+139.399301067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.594289 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" event={"ID":"a84795d5-4c6b-49d8-b2a8-fe8a086bcf52","Type":"ContainerStarted","Data":"ebb70718c4f8e67d320788c0d26e0f0cd8ed13f2ebee7df3aee9eeca5cf997e4"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.596803 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" event={"ID":"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6","Type":"ContainerStarted","Data":"541b83c76470e2ebad5692aa0ac94a46cd20949c777da174adf96e302de078c2"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.597889 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.599233 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-prq4h" event={"ID":"5514e80d-3def-4db9-90cc-67918bfa211a","Type":"ContainerStarted","Data":"cf9f24143edd8f4ae886a98f4d0fda7556a695497e3e1a33938f036963cacdb4"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.623496 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" event={"ID":"24396734-e237-4fb3-9cae-8c08db3a9122","Type":"ContainerStarted","Data":"44eff93824f19e901dd8c23110fc8900373cc83a04c249c949a3b963839b5b77"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.631529 4772 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vsrmr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.37:6443/healthz\": dial tcp 10.217.0.37:6443: connect: connection refused" start-of-body= Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.631584 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" podUID="fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.37:6443/healthz\": dial tcp 10.217.0.37:6443: connect: connection refused" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.663318 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:09:00 crc 
kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 28 11:09:00 crc kubenswrapper[4772]: [+]process-running ok Nov 28 11:09:00 crc kubenswrapper[4772]: healthz check failed Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.663389 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.663779 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.677449 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.678880 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl" event={"ID":"1324dfe7-848a-4347-b1ce-6e80f0c20d0c","Type":"ContainerStarted","Data":"d1e36de92bf160c254b84ad42a50fe90e4d5f99ae51f690f35fd52119e3c3bb9"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.679372 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl" event={"ID":"1324dfe7-848a-4347-b1ce-6e80f0c20d0c","Type":"ContainerStarted","Data":"50886409db6aab2a8244b162a2b92c36fb3cfc0e0e862c456b07fd00c9b30def"} Nov 28 11:09:00 crc kubenswrapper[4772]: E1128 11:09:00.681010 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:01.180992593 +0000 UTC m=+139.504235900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.697981 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dkfw4"] Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.728781 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bqx72" event={"ID":"c687f311-bc59-4115-8bd9-46f90c85136f","Type":"ContainerStarted","Data":"c2d2962b74d7e8665159902b8e88b45e7107a642e946add59fa8222db4a4f9da"} Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.728823 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9j8qq"] Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.729921 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9j8qq" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.740466 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" podStartSLOduration=120.740438895 podStartE2EDuration="2m0.740438895s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:00.718962845 +0000 UTC m=+139.042206062" watchObservedRunningTime="2025-11-28 11:09:00.740438895 +0000 UTC m=+139.063682122" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.748069 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.779209 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" podStartSLOduration=120.779188732 podStartE2EDuration="2m0.779188732s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:00.759729756 +0000 UTC m=+139.082972983" watchObservedRunningTime="2025-11-28 11:09:00.779188732 +0000 UTC m=+139.102431959" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.779800 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:00 crc kubenswrapper[4772]: E1128 11:09:00.780924 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:01.280908786 +0000 UTC m=+139.604152013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.790945 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9j8qq"] Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.839412 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dttbx" podStartSLOduration=119.839396328 podStartE2EDuration="1m59.839396328s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:00.83596951 +0000 UTC m=+139.159212737" watchObservedRunningTime="2025-11-28 11:09:00.839396328 +0000 UTC m=+139.162639555" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.881345 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3108fd0f-46a1-45ab-b911-7a35f90a9a35-catalog-content\") pod \"redhat-marketplace-9j8qq\" (UID: \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\") " pod="openshift-marketplace/redhat-marketplace-9j8qq" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.881457 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hdqr\" (UniqueName: \"kubernetes.io/projected/3108fd0f-46a1-45ab-b911-7a35f90a9a35-kube-api-access-4hdqr\") pod \"redhat-marketplace-9j8qq\" (UID: \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\") " pod="openshift-marketplace/redhat-marketplace-9j8qq" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.881521 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.881543 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3108fd0f-46a1-45ab-b911-7a35f90a9a35-utilities\") pod \"redhat-marketplace-9j8qq\" (UID: \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\") " pod="openshift-marketplace/redhat-marketplace-9j8qq" Nov 28 11:09:00 crc kubenswrapper[4772]: E1128 11:09:00.886758 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:01.386739957 +0000 UTC m=+139.709983224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.983714 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.983958 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3108fd0f-46a1-45ab-b911-7a35f90a9a35-catalog-content\") pod \"redhat-marketplace-9j8qq\" (UID: \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\") " pod="openshift-marketplace/redhat-marketplace-9j8qq" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.983994 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hdqr\" (UniqueName: \"kubernetes.io/projected/3108fd0f-46a1-45ab-b911-7a35f90a9a35-kube-api-access-4hdqr\") pod \"redhat-marketplace-9j8qq\" (UID: \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\") " pod="openshift-marketplace/redhat-marketplace-9j8qq" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.984027 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3108fd0f-46a1-45ab-b911-7a35f90a9a35-utilities\") pod \"redhat-marketplace-9j8qq\" (UID: \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\") " pod="openshift-marketplace/redhat-marketplace-9j8qq" Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.984611 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3108fd0f-46a1-45ab-b911-7a35f90a9a35-utilities\") pod \"redhat-marketplace-9j8qq\" (UID: \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\") " pod="openshift-marketplace/redhat-marketplace-9j8qq" Nov 28 11:09:00 crc kubenswrapper[4772]: E1128 11:09:00.984674 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:01.484660638 +0000 UTC m=+139.807903865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:00 crc kubenswrapper[4772]: I1128 11:09:00.984866 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3108fd0f-46a1-45ab-b911-7a35f90a9a35-catalog-content\") pod \"redhat-marketplace-9j8qq\" (UID: \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\") " pod="openshift-marketplace/redhat-marketplace-9j8qq" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.014920 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6gqv"] Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.015813 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wh4vl" podStartSLOduration=120.015796194 podStartE2EDuration="2m0.015796194s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:01.015086791 +0000 UTC m=+139.338330028" watchObservedRunningTime="2025-11-28 11:09:01.015796194 +0000 UTC m=+139.339039431" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.064620 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hdqr\" (UniqueName: \"kubernetes.io/projected/3108fd0f-46a1-45ab-b911-7a35f90a9a35-kube-api-access-4hdqr\") pod \"redhat-marketplace-9j8qq\" (UID: \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\") " pod="openshift-marketplace/redhat-marketplace-9j8qq" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.091832 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:01 crc kubenswrapper[4772]: E1128 11:09:01.092160 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:01.592149042 +0000 UTC m=+139.915392269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.103591 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nbzf2"] Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.113750 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbzf2" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.121555 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-bqx72" podStartSLOduration=8.121538952 podStartE2EDuration="8.121538952s" podCreationTimestamp="2025-11-28 11:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:01.118020961 +0000 UTC m=+139.441264198" watchObservedRunningTime="2025-11-28 11:09:01.121538952 +0000 UTC m=+139.444782179" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.209032 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:01 crc kubenswrapper[4772]: E1128 11:09:01.210062 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:01.710047785 +0000 UTC m=+140.033291012 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.237666 4772 util.go:30] "No sandbox for pod can be found. 
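The failure repeating through the records above is a single condition: every mount or unmount touching pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers", meaning the kubelet has not yet seen the hostpath CSI plugin register itself (the csi-hostpathplugin-sps4h pod only reports ContainerStarted further down). As an illustrative aid only, and not tooling that appears in this log, a client-go sketch like the following could confirm which drivers the node has registered, since the CSINode object is populated from the same kubelet registration data; the node name "crc" is taken from the log prefix.

// csinode_check.go: minimal sketch (assumption: a reachable cluster and a
// kubeconfig at the default location) listing the CSI drivers registered
// on node "crc". A driver absent from CSINode.Spec.Drivers corresponds to
// the "not found in the list of registered CSI drivers" errors above.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered CSI driver:", d.Name)
	}
}

The expectation, given the plugin pod starting below, would be that kubevirt.io.hostpath-provisioner appears in that list shortly afterwards, at which point the retried operations can succeed.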
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9j8qq" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.248439 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbzf2"] Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.266593 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d6mbw"] Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.314500 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l78f\" (UniqueName: \"kubernetes.io/projected/e3a7107f-67af-4f5a-a863-a5a39bf589e2-kube-api-access-5l78f\") pod \"redhat-marketplace-nbzf2\" (UID: \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\") " pod="openshift-marketplace/redhat-marketplace-nbzf2" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.314542 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a7107f-67af-4f5a-a863-a5a39bf589e2-utilities\") pod \"redhat-marketplace-nbzf2\" (UID: \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\") " pod="openshift-marketplace/redhat-marketplace-nbzf2" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.314602 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.314637 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a7107f-67af-4f5a-a863-a5a39bf589e2-catalog-content\") pod \"redhat-marketplace-nbzf2\" (UID: \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\") " pod="openshift-marketplace/redhat-marketplace-nbzf2" Nov 28 11:09:01 crc kubenswrapper[4772]: E1128 11:09:01.314913 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:01.814901845 +0000 UTC m=+140.138145072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.415173 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.415387 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a7107f-67af-4f5a-a863-a5a39bf589e2-catalog-content\") pod \"redhat-marketplace-nbzf2\" (UID: \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\") " pod="openshift-marketplace/redhat-marketplace-nbzf2" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.415439 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l78f\" (UniqueName: \"kubernetes.io/projected/e3a7107f-67af-4f5a-a863-a5a39bf589e2-kube-api-access-5l78f\") pod \"redhat-marketplace-nbzf2\" (UID: \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\") " pod="openshift-marketplace/redhat-marketplace-nbzf2" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.415467 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a7107f-67af-4f5a-a863-a5a39bf589e2-utilities\") pod \"redhat-marketplace-nbzf2\" (UID: \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\") " pod="openshift-marketplace/redhat-marketplace-nbzf2" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.415874 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a7107f-67af-4f5a-a863-a5a39bf589e2-utilities\") pod \"redhat-marketplace-nbzf2\" (UID: \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\") " pod="openshift-marketplace/redhat-marketplace-nbzf2" Nov 28 11:09:01 crc kubenswrapper[4772]: E1128 11:09:01.416201 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:01.916187832 +0000 UTC m=+140.239431059 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.416457 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a7107f-67af-4f5a-a863-a5a39bf589e2-catalog-content\") pod \"redhat-marketplace-nbzf2\" (UID: \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\") " pod="openshift-marketplace/redhat-marketplace-nbzf2" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.496555 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l78f\" (UniqueName: \"kubernetes.io/projected/e3a7107f-67af-4f5a-a863-a5a39bf589e2-kube-api-access-5l78f\") pod \"redhat-marketplace-nbzf2\" (UID: \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\") " pod="openshift-marketplace/redhat-marketplace-nbzf2" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.527270 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:01 crc kubenswrapper[4772]: E1128 11:09:01.527676 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:02.027663852 +0000 UTC m=+140.350907079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.611699 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4b2ps"] Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.628089 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:01 crc kubenswrapper[4772]: E1128 11:09:01.628565 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:02.128550246 +0000 UTC m=+140.451793463 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.665898 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:09:01 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 28 11:09:01 crc kubenswrapper[4772]: [+]process-running ok Nov 28 11:09:01 crc kubenswrapper[4772]: healthz check failed Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.666597 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.731241 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:01 crc kubenswrapper[4772]: E1128 11:09:01.731588 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:02.231576198 +0000 UTC m=+140.554819425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.773919 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sps4h" event={"ID":"a5b41370-17d7-4d6d-b93b-7d10b6403e53","Type":"ContainerStarted","Data":"0122b80cf64c691b1f82bfe267d7ea500ecc120f7f15d33dd297eced52fbd8bc"} Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.780809 4772 util.go:30] "No sandbox for pod can be found. 
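Each nestedpendingoperations error above ends with "No retries permitted until <t> (durationBeforeRetry 500ms)": a failed volume operation is not retried immediately but gated behind a per-operation backoff, and the "until" timestamp is simply the error time plus that delay (for example, the failure at 11:09:00.780924 is gated until 11:09:01.280908786, almost exactly 500ms later). As a rough reconstruction for illustration only, an assumption rather than the kubelet's actual source, the bookkeeping looks something like this:

// backoff_sketch.go: illustrative model (assumption, not kubelet code) of
// the retry gating printed in the log: the first failure arms a 500ms
// delay, and consecutive failures would grow it up to some cap.
package main

import (
	"errors"
	"fmt"
	"time"
)

type exponentialBackoff struct {
	duration time.Duration // wait before the next retry attempt
	maxDelay time.Duration // upper bound on the wait (assumed cap)
	lastErr  error
	readyAt  time.Time // "No retries permitted until" this instant
}

func (b *exponentialBackoff) update(err error) {
	if b.duration == 0 {
		b.duration = 500 * time.Millisecond // initial delay seen in the log
	} else {
		b.duration *= 2 // assumed doubling on consecutive failures
		if b.duration > b.maxDelay {
			b.duration = b.maxDelay
		}
	}
	b.lastErr = err
	b.readyAt = time.Now().Add(b.duration)
}

func main() {
	b := &exponentialBackoff{maxDelay: 2 * time.Minute}
	b.update(errors.New("driver name kubevirt.io.hostpath-provisioner not registered"))
	fmt.Printf("No retries permitted until %s (durationBeforeRetry %s)\n",
		b.readyAt.Format(time.RFC3339Nano), b.duration)
}

The m=+139/m=+140 suffixes on the timestamps are the process's monotonic clock offsets, which is why consecutive "until" values in the records above advance in lockstep with the wall-clock times.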
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbzf2" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.814327 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6mbw" event={"ID":"fa914cfd-35f4-469e-a762-bc1dccea9f23","Type":"ContainerStarted","Data":"cece4cc62fcba71f15985254690a866e1921c6914f3454e846e295b055dd6844"} Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.815465 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pv78h" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.821894 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" event={"ID":"b36f0f60-82ce-49f5-b23b-2f585244e8db","Type":"ContainerStarted","Data":"1d9f46f9820bc2533a02aedfeda248320e51dbe6be1d17bd9576aaeab7dad8c3"} Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.821930 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" event={"ID":"b36f0f60-82ce-49f5-b23b-2f585244e8db","Type":"ContainerStarted","Data":"a518a2efbd5a8ef64f44284251060e5a3515459795a27fca0d54d649036bbac9"} Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.831932 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.833168 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkfw4" event={"ID":"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8","Type":"ContainerStarted","Data":"fbb2b792a6a7d0e4bb8a0b27ccaf7b24bdf13850daa8e6e951529f5762fe5c7a"} Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.833203 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkfw4" event={"ID":"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8","Type":"ContainerStarted","Data":"3747b3753f674d2d13c6fc3d8f14a8f9055742d1eb98592e057050a4b638a005"} Nov 28 11:09:01 crc kubenswrapper[4772]: E1128 11:09:01.833678 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:02.333655941 +0000 UTC m=+140.656899168 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.834329 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:01 crc kubenswrapper[4772]: E1128 11:09:01.835002 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:02.334994223 +0000 UTC m=+140.658237450 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.872055 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" event={"ID":"a84795d5-4c6b-49d8-b2a8-fe8a086bcf52","Type":"ContainerStarted","Data":"5a87af4ecd830fc3127603a8244c77b78d3cfc816ea5ca3c0d5b9abaad16ef70"} Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.873119 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.909602 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mt9sz"] Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.910073 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.910708 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mt9sz" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.912678 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k8l6n" podStartSLOduration=121.912663632 podStartE2EDuration="2m1.912663632s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:01.903247114 +0000 UTC m=+140.226490341" watchObservedRunningTime="2025-11-28 11:09:01.912663632 +0000 UTC m=+140.235906859" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.913934 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.959596 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:01 crc kubenswrapper[4772]: E1128 11:09:01.959981 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:02.45995151 +0000 UTC m=+140.783194737 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.960183 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0eee58e-18fa-496f-b10f-6e590c7e39c8-utilities\") pod \"redhat-operators-mt9sz\" (UID: \"f0eee58e-18fa-496f-b10f-6e590c7e39c8\") " pod="openshift-marketplace/redhat-operators-mt9sz" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.960246 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvhsn\" (UniqueName: \"kubernetes.io/projected/f0eee58e-18fa-496f-b10f-6e590c7e39c8-kube-api-access-jvhsn\") pod \"redhat-operators-mt9sz\" (UID: \"f0eee58e-18fa-496f-b10f-6e590c7e39c8\") " pod="openshift-marketplace/redhat-operators-mt9sz" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.960432 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0eee58e-18fa-496f-b10f-6e590c7e39c8-catalog-content\") pod \"redhat-operators-mt9sz\" (UID: \"f0eee58e-18fa-496f-b10f-6e590c7e39c8\") " pod="openshift-marketplace/redhat-operators-mt9sz" Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.960482 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:01 crc kubenswrapper[4772]: E1128 11:09:01.960805 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:02.460797256 +0000 UTC m=+140.784040483 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.974908 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mt9sz"] Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.982817 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n9b9t" event={"ID":"e08207c7-a494-42f9-b59a-946ca0134a24","Type":"ContainerStarted","Data":"6cd1770f4b00e0f15075da902090a74bacd23cc812cd21dae8f1105374347a41"} Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.983601 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n9b9t" event={"ID":"e08207c7-a494-42f9-b59a-946ca0134a24","Type":"ContainerStarted","Data":"b971ad1186dd3c15d156fee936dc1fcadc1f9f3771ad7f831682aa4e049a2baa"} Nov 28 11:09:01 crc kubenswrapper[4772]: I1128 11:09:01.984809 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" event={"ID":"24396734-e237-4fb3-9cae-8c08db3a9122","Type":"ContainerStarted","Data":"3f5eb90ffa04d24c387d4387b20468ba508a0465e449bba41273179694d6dd3d"} Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.050423 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" podStartSLOduration=121.050395263 podStartE2EDuration="2m1.050395263s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:02.005664357 +0000 UTC m=+140.328907584" watchObservedRunningTime="2025-11-28 11:09:02.050395263 +0000 UTC m=+140.373638490" Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.106236 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.106717 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0eee58e-18fa-496f-b10f-6e590c7e39c8-utilities\") pod \"redhat-operators-mt9sz\" (UID: 
\"f0eee58e-18fa-496f-b10f-6e590c7e39c8\") " pod="openshift-marketplace/redhat-operators-mt9sz" Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.106744 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvhsn\" (UniqueName: \"kubernetes.io/projected/f0eee58e-18fa-496f-b10f-6e590c7e39c8-kube-api-access-jvhsn\") pod \"redhat-operators-mt9sz\" (UID: \"f0eee58e-18fa-496f-b10f-6e590c7e39c8\") " pod="openshift-marketplace/redhat-operators-mt9sz" Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.106820 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0eee58e-18fa-496f-b10f-6e590c7e39c8-catalog-content\") pod \"redhat-operators-mt9sz\" (UID: \"f0eee58e-18fa-496f-b10f-6e590c7e39c8\") " pod="openshift-marketplace/redhat-operators-mt9sz" Nov 28 11:09:02 crc kubenswrapper[4772]: E1128 11:09:02.107579 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:02.607556943 +0000 UTC m=+140.930800180 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.107956 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0eee58e-18fa-496f-b10f-6e590c7e39c8-utilities\") pod \"redhat-operators-mt9sz\" (UID: \"f0eee58e-18fa-496f-b10f-6e590c7e39c8\") " pod="openshift-marketplace/redhat-operators-mt9sz" Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.110964 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0eee58e-18fa-496f-b10f-6e590c7e39c8-catalog-content\") pod \"redhat-operators-mt9sz\" (UID: \"f0eee58e-18fa-496f-b10f-6e590c7e39c8\") " pod="openshift-marketplace/redhat-operators-mt9sz" Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.140311 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh" event={"ID":"f1ac9e8a-e68c-4d22-8841-3f36863b0574","Type":"ContainerStarted","Data":"a78b20f81c3677478e6e609d31698aa6a5c0a710514d6a8bd2808a9c0362cce6"} Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.140343 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gqv" event={"ID":"7b72c082-4b2a-4357-b95c-51d456028f86","Type":"ContainerStarted","Data":"1244a01a39a35bebf42ebd20d871f94d7985a483492b719e1b419e79b8e75376"} Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.140375 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" event={"ID":"ee7591a8-5999-4c4f-a0c8-de4fc90ac59d","Type":"ContainerStarted","Data":"5d6f39d070587d0613d942e1163a5dbfd0e010f6dbbf8ac986e5e9bcaaa169e1"} Nov 28 11:09:02 crc kubenswrapper[4772]: 
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.152650 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4b2ps" event={"ID":"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8","Type":"ContainerStarted","Data":"4289cbbc212f5b797765e01fec47e7450896e854fdb77602c1b333df330a7881"}
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.170600 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvhsn\" (UniqueName: \"kubernetes.io/projected/f0eee58e-18fa-496f-b10f-6e590c7e39c8-kube-api-access-jvhsn\") pod \"redhat-operators-mt9sz\" (UID: \"f0eee58e-18fa-496f-b10f-6e590c7e39c8\") " pod="openshift-marketplace/redhat-operators-mt9sz"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.191767 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm" event={"ID":"d3999e53-dcc7-4e16-8463-de30fb7fcbf6","Type":"ContainerStarted","Data":"bc67825178dcc856fa7343c0d2ff77d8ec5a82a9e8d5e19dcb09bd40ec8b1075"}
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.212645 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dg24" event={"ID":"048a22d4-da9f-4fc6-837a-c5398965a0f0","Type":"ContainerStarted","Data":"49dea7618108566af4a28a5f84bc4f94c5bd7f3de6e795050b4aac6650ba8cf0"}
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.213201 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48"
Nov 28 11:09:02 crc kubenswrapper[4772]: E1128 11:09:02.216558 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:02.716538824 +0000 UTC m=+141.039782131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.253196 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-prq4h" event={"ID":"5514e80d-3def-4db9-90cc-67918bfa211a","Type":"ContainerStarted","Data":"fe36dd7a678890df4545166979e83321cee6082c68c3085791886d01748aae82"}
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.253239 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-prq4h" event={"ID":"5514e80d-3def-4db9-90cc-67918bfa211a","Type":"ContainerStarted","Data":"1b63893a57f54f8303d1f5ac3f51af8041161028df67a47da2e3a91feba45708"}
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.253814 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-prq4h"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.254448 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9j8qq"]
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.266251 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" event={"ID":"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6","Type":"ContainerStarted","Data":"0fa72c8bfb52603f7dcdd4fb82dc390cc37765362f4d293ddbb512b77ecb2ffa"}
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.314771 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 11:09:02 crc kubenswrapper[4772]: E1128 11:09:02.316298 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:02.816276792 +0000 UTC m=+141.139520029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.324093 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7864f"]
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.325260 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.325609 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7" event={"ID":"d827072f-b633-42a9-b6c0-1f515508f488","Type":"ContainerStarted","Data":"9af82326bf05e2568d256dc1e77e22bcb2343b0967ef8b5c3c0a7b834cf97b18"}
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.332641 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mt9sz"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.365990 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-n9b9t" podStartSLOduration=122.365951825 podStartE2EDuration="2m2.365951825s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:02.346780858 +0000 UTC m=+140.670024085" watchObservedRunningTime="2025-11-28 11:09:02.365951825 +0000 UTC m=+140.689195052"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.367677 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7864f"]
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.399776 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bftgd" event={"ID":"a3ab6f91-ed22-4f1e-ad45-7532efc7ae1f","Type":"ContainerStarted","Data":"aa938dd6dc65c244244c80796576d51d999d6c10d64329634878df7003eb94db"}
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.412500 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" event={"ID":"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb","Type":"ContainerStarted","Data":"2dcb169f61c0d68d9ae4a89ff12857a51bcd6c625c5f02668f075f68c76b4a9f"}
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.413782 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.416091 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-catalog-content\") pod \"redhat-operators-7864f\" (UID: \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\") " pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.416219 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6qc6\" (UniqueName: \"kubernetes.io/projected/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-kube-api-access-b6qc6\") pod \"redhat-operators-7864f\" (UID: \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\") " pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.416247 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.416300 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-utilities\") pod \"redhat-operators-7864f\" (UID: \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\") " pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:02 crc kubenswrapper[4772]: E1128 11:09:02.417696 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:02.917679833 +0000 UTC m=+141.240923130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.422588 4772 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rnhdf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.422641 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" podUID="4bf0b541-25cb-4873-b2e2-a9466dfb4ccb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.430624 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" event={"ID":"f18077e0-f3d3-4327-a384-321e3d9b9f78","Type":"ContainerStarted","Data":"c8c64392d159ae74f0230c2a1d529919f8b7434b37de6536d21146168e9363c5"}
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.452826 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" event={"ID":"7a4baf8b-1e57-43cb-972a-b3ad73d3a192","Type":"ContainerStarted","Data":"49861df3eb28c9fc073ae30f16ac3c9f35caa01b911fdc9d0de7256d9e50ba53"}
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.452891 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" event={"ID":"7a4baf8b-1e57-43cb-972a-b3ad73d3a192","Type":"ContainerStarted","Data":"8a65e00812ccef19c531d8b1af65cecfbcd4cef66cf2efdf77679544fd02212e"}
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.465539 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk" event={"ID":"8903d4eb-b874-4abf-86b6-e44e7bd951b0","Type":"ContainerStarted","Data":"a8fc4dcfc61604652c4a9c047cb64ca4b2c3717f3718238ae2fc175070ab2744"}
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.468992 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-wwxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.469047 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wwxmx" podUID="9078e378-b549-4e5e-82ee-b9a96cf8e4da" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.502619 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2lkxn"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.516842 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.517397 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-utilities\") pod \"redhat-operators-7864f\" (UID: \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\") " pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.517530 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-catalog-content\") pod \"redhat-operators-7864f\" (UID: \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\") " pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.517991 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6qc6\" (UniqueName: \"kubernetes.io/projected/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-kube-api-access-b6qc6\") pod \"redhat-operators-7864f\" (UID: \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\") " pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:02 crc kubenswrapper[4772]: E1128 11:09:02.522535 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:03.022512463 +0000 UTC m=+141.345755700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
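The readiness failures above are plain HTTP checks: "connect: connection refused" for marketplace-operator and the downloads pod just means nothing is listening on the probed port yet, which is expected while the containers are still starting. A minimal stand-in for what the prober does is sketched below; this is an illustrative assumption rather than kubelet code, with the URL taken from the marketplace-operator record:

// probe_sketch.go: simplified readiness-style HTTP check. A dial error maps
// to the "connect: connection refused" outputs above; a non-2xx/3xx status
// maps to "HTTP probe failed with statuscode: N" as seen for the router.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: time.Second} // kubelet probes are also time-bounded
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 10.217.0.32:8080: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil // roughly the success criterion for HTTP probes
	}
	return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	if err := probe("http://10.217.0.32:8080/healthz"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}

The router's startup probe, by contrast, does get an HTTP response (500) and the start-of-body lines list which internal checks ([-]backend-http, [-]has-synced) are still failing.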
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.555216 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-catalog-content\") pod \"redhat-operators-7864f\" (UID: \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\") " pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.555937 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-utilities\") pod \"redhat-operators-7864f\" (UID: \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\") " pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.619812 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g6nz6" podStartSLOduration=122.619793853 podStartE2EDuration="2m2.619793853s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:02.602478335 +0000 UTC m=+140.925721582" watchObservedRunningTime="2025-11-28 11:09:02.619793853 +0000 UTC m=+140.943037080"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.624571 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48"
Nov 28 11:09:02 crc kubenswrapper[4772]: E1128 11:09:02.624957 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:03.124943006 +0000 UTC m=+141.448186233 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.659265 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6qc6\" (UniqueName: \"kubernetes.io/projected/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-kube-api-access-b6qc6\") pod \"redhat-operators-7864f\" (UID: \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\") " pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.664174 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-bftgd" podStartSLOduration=121.664151827 podStartE2EDuration="2m1.664151827s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:02.657548998 +0000 UTC m=+140.980792225" watchObservedRunningTime="2025-11-28 11:09:02.664151827 +0000 UTC m=+140.987395054"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.667897 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 11:09:02 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld
Nov 28 11:09:02 crc kubenswrapper[4772]: [+]process-running ok
Nov 28 11:09:02 crc kubenswrapper[4772]: healthz check failed
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.667952 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.699903 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.730793 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 11:09:02 crc kubenswrapper[4772]: E1128 11:09:02.731150 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:03.231133408 +0000 UTC m=+141.554376635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.740640 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-prq4h" podStartSLOduration=9.740626519 podStartE2EDuration="9.740626519s" podCreationTimestamp="2025-11-28 11:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:02.722046271 +0000 UTC m=+141.045289498" watchObservedRunningTime="2025-11-28 11:09:02.740626519 +0000 UTC m=+141.063869746"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.835378 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48"
Nov 28 11:09:02 crc kubenswrapper[4772]: E1128 11:09:02.836067 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:03.336052011 +0000 UTC m=+141.659295238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.844732 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" podStartSLOduration=122.844709495 podStartE2EDuration="2m2.844709495s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:02.829957068 +0000 UTC m=+141.153200295" watchObservedRunningTime="2025-11-28 11:09:02.844709495 +0000 UTC m=+141.167952722"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.862310 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bspm" podStartSLOduration=121.862293601 podStartE2EDuration="2m1.862293601s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:02.861029171 +0000 UTC m=+141.184272398" watchObservedRunningTime="2025-11-28 11:09:02.862293601 +0000 UTC m=+141.185536828"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.891447 4772 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j7mqc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.891527 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" podUID="a84795d5-4c6b-49d8-b2a8-fe8a086bcf52" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.896341 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbzf2"]
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.932244 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs2tk" podStartSLOduration=122.932228866 podStartE2EDuration="2m2.932228866s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:02.918751869 +0000 UTC m=+141.241995086" watchObservedRunningTime="2025-11-28 11:09:02.932228866 +0000 UTC m=+141.255472093"
Nov 28 11:09:02 crc kubenswrapper[4772]: I1128 11:09:02.936035 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:02 crc kubenswrapper[4772]: E1128 11:09:02.936487 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:03.4364691 +0000 UTC m=+141.759712327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:02 crc kubenswrapper[4772]: E1128 11:09:02.939788 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3108fd0f_46a1_45ab_b911_7a35f90a9a35.slice/crio-conmon-3ea7f434be1a1307e409a26a5a27083a2ffe245e213178d8a0ffa571ff744fab.scope\": RecentStats: unable to find data in memory cache]" Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.003878 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r2cqm" podStartSLOduration=122.003861374 podStartE2EDuration="2m2.003861374s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:03.000788587 +0000 UTC m=+141.324031824" watchObservedRunningTime="2025-11-28 11:09:03.003861374 +0000 UTC m=+141.327104601" Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.038709 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.038964 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dg24" podStartSLOduration=122.038948235 podStartE2EDuration="2m2.038948235s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:03.037932703 +0000 UTC m=+141.361175920" watchObservedRunningTime="2025-11-28 11:09:03.038948235 +0000 UTC m=+141.362191462" Nov 28 11:09:03 crc kubenswrapper[4772]: E1128 11:09:03.039047 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:03.539035688 +0000 UTC m=+141.862278915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.075716 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cltd7" podStartSLOduration=123.075700069 podStartE2EDuration="2m3.075700069s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:03.074691237 +0000 UTC m=+141.397934464" watchObservedRunningTime="2025-11-28 11:09:03.075700069 +0000 UTC m=+141.398943296" Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.141939 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:03 crc kubenswrapper[4772]: E1128 11:09:03.142380 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:03.642344809 +0000 UTC m=+141.965588036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.142460 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:03 crc kubenswrapper[4772]: E1128 11:09:03.142753 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:03.642745542 +0000 UTC m=+141.965988769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.145464 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" podStartSLOduration=122.145445147 podStartE2EDuration="2m2.145445147s" podCreationTimestamp="2025-11-28 11:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:03.140714617 +0000 UTC m=+141.463957844" watchObservedRunningTime="2025-11-28 11:09:03.145445147 +0000 UTC m=+141.468688374" Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.244735 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:03 crc kubenswrapper[4772]: E1128 11:09:03.245004 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:03.744967297 +0000 UTC m=+142.068210524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.245147 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:03 crc kubenswrapper[4772]: E1128 11:09:03.245550 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:03.745540806 +0000 UTC m=+142.068784033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.272441 4772 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vsrmr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.37:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.272503 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" podUID="fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.37:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.349952 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:03 crc kubenswrapper[4772]: E1128 11:09:03.350410 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:03.850374945 +0000 UTC m=+142.173618172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.451176 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:03 crc kubenswrapper[4772]: E1128 11:09:03.451788 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:03.951776516 +0000 UTC m=+142.275019733 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.557880 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:03 crc kubenswrapper[4772]: E1128 11:09:03.558240 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:04.058225266 +0000 UTC m=+142.381468493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.582564 4772 generic.go:334] "Generic (PLEG): container finished" podID="fa914cfd-35f4-469e-a762-bc1dccea9f23" containerID="609e1ee42aacba3d7f2b6410484dafc2f02d110e42c00a9a6ed04087e043441f" exitCode=0 Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.582661 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6mbw" event={"ID":"fa914cfd-35f4-469e-a762-bc1dccea9f23","Type":"ContainerDied","Data":"609e1ee42aacba3d7f2b6410484dafc2f02d110e42c00a9a6ed04087e043441f"} Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.587880 4772 generic.go:334] "Generic (PLEG): container finished" podID="bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" containerID="fbb2b792a6a7d0e4bb8a0b27ccaf7b24bdf13850daa8e6e951529f5762fe5c7a" exitCode=0 Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.587952 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkfw4" event={"ID":"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8","Type":"ContainerDied","Data":"fbb2b792a6a7d0e4bb8a0b27ccaf7b24bdf13850daa8e6e951529f5762fe5c7a"} Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.600081 4772 generic.go:334] "Generic (PLEG): container finished" podID="e3a7107f-67af-4f5a-a863-a5a39bf589e2" containerID="a21b84698a0116d24bb32d1e49e5f7f29ce0d25b839d6b58922fbb1ff426849f" exitCode=0 Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.600153 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbzf2" event={"ID":"e3a7107f-67af-4f5a-a863-a5a39bf589e2","Type":"ContainerDied","Data":"a21b84698a0116d24bb32d1e49e5f7f29ce0d25b839d6b58922fbb1ff426849f"} Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.600179 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-nbzf2" event={"ID":"e3a7107f-67af-4f5a-a863-a5a39bf589e2","Type":"ContainerStarted","Data":"87f67841fdd1adab3b98d8d50c57f57590c45991dc34db3107476bde19f1b2cc"} Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.605877 4772 generic.go:334] "Generic (PLEG): container finished" podID="7b72c082-4b2a-4357-b95c-51d456028f86" containerID="5ba92b670c233614bbc8e8efe45f2defadf0fa9c3ac4d3373780f0e83c3b019a" exitCode=0 Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.605942 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gqv" event={"ID":"7b72c082-4b2a-4357-b95c-51d456028f86","Type":"ContainerDied","Data":"5ba92b670c233614bbc8e8efe45f2defadf0fa9c3ac4d3373780f0e83c3b019a"} Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.630204 4772 generic.go:334] "Generic (PLEG): container finished" podID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" containerID="bbdb80703f22b497c51dd2c8ecf3dcf0e02b32c38e324a6b97602cac811b43d6" exitCode=0 Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.630326 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4b2ps" event={"ID":"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8","Type":"ContainerDied","Data":"bbdb80703f22b497c51dd2c8ecf3dcf0e02b32c38e324a6b97602cac811b43d6"} Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.632513 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mt9sz"] Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.647947 4772 generic.go:334] "Generic (PLEG): container finished" podID="3108fd0f-46a1-45ab-b911-7a35f90a9a35" containerID="3ea7f434be1a1307e409a26a5a27083a2ffe245e213178d8a0ffa571ff744fab" exitCode=0 Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.648171 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9j8qq" event={"ID":"3108fd0f-46a1-45ab-b911-7a35f90a9a35","Type":"ContainerDied","Data":"3ea7f434be1a1307e409a26a5a27083a2ffe245e213178d8a0ffa571ff744fab"} Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.648240 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9j8qq" event={"ID":"3108fd0f-46a1-45ab-b911-7a35f90a9a35","Type":"ContainerStarted","Data":"7ceb282b8cc410cf346e5b6eb128935bf71b95278be4922509a3560b39700ab1"} Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.666247 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:03 crc kubenswrapper[4772]: E1128 11:09:03.666688 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:04.16667232 +0000 UTC m=+142.489915627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.675893 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:09:03 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 28 11:09:03 crc kubenswrapper[4772]: [+]process-running ok Nov 28 11:09:03 crc kubenswrapper[4772]: healthz check failed Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.675957 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.679514 4772 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rnhdf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.679516 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-wwxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.680158 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" podUID="4bf0b541-25cb-4873-b2e2-a9466dfb4ccb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.681151 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wwxmx" podUID="9078e378-b549-4e5e-82ee-b9a96cf8e4da" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.704173 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7mqc" Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.737044 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7864f"] Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.768736 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:03 crc kubenswrapper[4772]: 
E1128 11:09:03.771076 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:04.271061246 +0000 UTC m=+142.594304473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.784841 4772 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.786829 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.873729 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:03 crc kubenswrapper[4772]: E1128 11:09:03.874109 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:04.374097358 +0000 UTC m=+142.697340585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:03 crc kubenswrapper[4772]: I1128 11:09:03.974688 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:03 crc kubenswrapper[4772]: E1128 11:09:03.975047 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:04.475030914 +0000 UTC m=+142.798274141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.075916 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:04 crc kubenswrapper[4772]: E1128 11:09:04.076340 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:09:04.576318501 +0000 UTC m=+142.899561728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pgb48" (UID: "01678d74-64f0-4bee-b900-6dd92b577842") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.184139 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:04 crc kubenswrapper[4772]: E1128 11:09:04.184673 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:09:04.684654172 +0000 UTC m=+143.007897399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.191063 4772 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-28T11:09:03.78508709Z","Handler":null,"Name":""} Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.248483 4772 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.248525 4772 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.285712 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.289760 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.289815 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.450691 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pgb48\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.488153 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.498920 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.499182 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.655939 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.660525 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:09:04 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 28 11:09:04 crc kubenswrapper[4772]: [+]process-running ok Nov 28 11:09:04 crc kubenswrapper[4772]: healthz check failed Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.660581 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.663190 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.666067 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.673347 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.675141 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.713471 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37e74cda-9666-4ee6-9f17-e0771c8dbecb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"37e74cda-9666-4ee6-9f17-e0771c8dbecb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.713610 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37e74cda-9666-4ee6-9f17-e0771c8dbecb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"37e74cda-9666-4ee6-9f17-e0771c8dbecb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.736336 4772 generic.go:334] "Generic (PLEG): container finished" podID="f0eee58e-18fa-496f-b10f-6e590c7e39c8" containerID="8f4fde0848c43a02f27937326e517fb2bb47704141488165aec07bd3294816ff" exitCode=0 Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.736466 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mt9sz" event={"ID":"f0eee58e-18fa-496f-b10f-6e590c7e39c8","Type":"ContainerDied","Data":"8f4fde0848c43a02f27937326e517fb2bb47704141488165aec07bd3294816ff"} Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.736496 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mt9sz" event={"ID":"f0eee58e-18fa-496f-b10f-6e590c7e39c8","Type":"ContainerStarted","Data":"e7bf65ac6860990cacce97e6dfb2acd156b915a3a8c4bfaa49f791f50bc70746"} Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.744968 4772 generic.go:334] "Generic (PLEG): container finished" podID="24396734-e237-4fb3-9cae-8c08db3a9122" containerID="3f5eb90ffa04d24c387d4387b20468ba508a0465e449bba41273179694d6dd3d" exitCode=0 Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.745056 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" event={"ID":"24396734-e237-4fb3-9cae-8c08db3a9122","Type":"ContainerDied","Data":"3f5eb90ffa04d24c387d4387b20468ba508a0465e449bba41273179694d6dd3d"} Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.754516 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sps4h" event={"ID":"a5b41370-17d7-4d6d-b93b-7d10b6403e53","Type":"ContainerStarted","Data":"96947987d3e4b046b408ddbc75361c4d74aa33bd438803f9225d2bb9764c9e39"} Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.754554 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sps4h" 
event={"ID":"a5b41370-17d7-4d6d-b93b-7d10b6403e53","Type":"ContainerStarted","Data":"ff46da2555857930dba03f64868e12e43b5e369afc8504f8f637a263dd0b05cd"} Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.767820 4772 generic.go:334] "Generic (PLEG): container finished" podID="200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" containerID="8e4f6e48fb38a480ff4139e3a35e876812bc9ed3bede5d32d6561f40343979d1" exitCode=0 Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.769587 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7864f" event={"ID":"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a","Type":"ContainerDied","Data":"8e4f6e48fb38a480ff4139e3a35e876812bc9ed3bede5d32d6561f40343979d1"} Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.769612 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7864f" event={"ID":"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a","Type":"ContainerStarted","Data":"831b6e5e82aea83a682e5f6cdfa691cb974ac036ae276b6402cd077cd1ee9521"} Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.776587 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.824144 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37e74cda-9666-4ee6-9f17-e0771c8dbecb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"37e74cda-9666-4ee6-9f17-e0771c8dbecb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.825766 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37e74cda-9666-4ee6-9f17-e0771c8dbecb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"37e74cda-9666-4ee6-9f17-e0771c8dbecb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.826165 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37e74cda-9666-4ee6-9f17-e0771c8dbecb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"37e74cda-9666-4ee6-9f17-e0771c8dbecb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:09:04 crc kubenswrapper[4772]: I1128 11:09:04.851172 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37e74cda-9666-4ee6-9f17-e0771c8dbecb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"37e74cda-9666-4ee6-9f17-e0771c8dbecb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.004538 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pgb48"] Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.012586 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.559531 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 11:09:05 crc kubenswrapper[4772]: W1128 11:09:05.584061 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod37e74cda_9666_4ee6_9f17_e0771c8dbecb.slice/crio-a4854c9c9218f03bae2d3b9f351d35727fd72ec1f1fedc1142fb30a6382a2ced WatchSource:0}: Error finding container a4854c9c9218f03bae2d3b9f351d35727fd72ec1f1fedc1142fb30a6382a2ced: Status 404 returned error can't find the container with id a4854c9c9218f03bae2d3b9f351d35727fd72ec1f1fedc1142fb30a6382a2ced Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.586392 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.586429 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.609611 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.673611 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-pjlzq" Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.677094 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:09:05 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 28 11:09:05 crc kubenswrapper[4772]: [+]process-running ok Nov 28 11:09:05 crc kubenswrapper[4772]: healthz check failed Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.677161 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.779533 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" event={"ID":"01678d74-64f0-4bee-b900-6dd92b577842","Type":"ContainerStarted","Data":"32f8cdb7c5e429280cf8323613aaa6fdc07dc6a0b3dfa211c8e4b03a3554a3a3"} Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.779573 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" event={"ID":"01678d74-64f0-4bee-b900-6dd92b577842","Type":"ContainerStarted","Data":"a0ee6e411006de20c74f36ba49954caf9b126a7e373a9bb6752c61daab5be368"} Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.780487 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.801792 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sps4h" event={"ID":"a5b41370-17d7-4d6d-b93b-7d10b6403e53","Type":"ContainerStarted","Data":"e039fa995fcc19f91b40785a9a29f78a0951b5a95b3d151664cb6be8dc4be98e"} Nov 28 
11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.807130 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" podStartSLOduration=125.807116306 podStartE2EDuration="2m5.807116306s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:05.806132434 +0000 UTC m=+144.129375661" watchObservedRunningTime="2025-11-28 11:09:05.807116306 +0000 UTC m=+144.130359533" Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.813426 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"37e74cda-9666-4ee6-9f17-e0771c8dbecb","Type":"ContainerStarted","Data":"a4854c9c9218f03bae2d3b9f351d35727fd72ec1f1fedc1142fb30a6382a2ced"} Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.817929 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4dx8r" Nov 28 11:09:05 crc kubenswrapper[4772]: I1128 11:09:05.834023 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-sps4h" podStartSLOduration=12.834007017 podStartE2EDuration="12.834007017s" podCreationTimestamp="2025-11-28 11:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:05.83314163 +0000 UTC m=+144.156384857" watchObservedRunningTime="2025-11-28 11:09:05.834007017 +0000 UTC m=+144.157250234" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.018072 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.213729 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-wwxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.213735 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-wwxmx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.213786 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wwxmx" podUID="9078e378-b549-4e5e-82ee-b9a96cf8e4da" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.213794 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wwxmx" podUID="9078e378-b549-4e5e-82ee-b9a96cf8e4da" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.235688 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.361150 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shp67\" (UniqueName: \"kubernetes.io/projected/24396734-e237-4fb3-9cae-8c08db3a9122-kube-api-access-shp67\") pod \"24396734-e237-4fb3-9cae-8c08db3a9122\" (UID: \"24396734-e237-4fb3-9cae-8c08db3a9122\") " Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.361502 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24396734-e237-4fb3-9cae-8c08db3a9122-secret-volume\") pod \"24396734-e237-4fb3-9cae-8c08db3a9122\" (UID: \"24396734-e237-4fb3-9cae-8c08db3a9122\") " Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.361600 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24396734-e237-4fb3-9cae-8c08db3a9122-config-volume\") pod \"24396734-e237-4fb3-9cae-8c08db3a9122\" (UID: \"24396734-e237-4fb3-9cae-8c08db3a9122\") " Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.363224 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24396734-e237-4fb3-9cae-8c08db3a9122-config-volume" (OuterVolumeSpecName: "config-volume") pod "24396734-e237-4fb3-9cae-8c08db3a9122" (UID: "24396734-e237-4fb3-9cae-8c08db3a9122"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.369719 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24396734-e237-4fb3-9cae-8c08db3a9122-kube-api-access-shp67" (OuterVolumeSpecName: "kube-api-access-shp67") pod "24396734-e237-4fb3-9cae-8c08db3a9122" (UID: "24396734-e237-4fb3-9cae-8c08db3a9122"). InnerVolumeSpecName "kube-api-access-shp67". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.370040 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24396734-e237-4fb3-9cae-8c08db3a9122-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "24396734-e237-4fb3-9cae-8c08db3a9122" (UID: "24396734-e237-4fb3-9cae-8c08db3a9122"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.463881 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24396734-e237-4fb3-9cae-8c08db3a9122-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.463912 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shp67\" (UniqueName: \"kubernetes.io/projected/24396734-e237-4fb3-9cae-8c08db3a9122-kube-api-access-shp67\") on node \"crc\" DevicePath \"\"" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.464310 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24396734-e237-4fb3-9cae-8c08db3a9122-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.500654 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.500702 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-xffg6" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.503841 4772 patch_prober.go:28] interesting pod/console-f9d7485db-xffg6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.503892 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xffg6" podUID="7c2f8de4-e5c0-493a-b16f-b415832ba9bd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.657667 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:09:06 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 28 11:09:06 crc kubenswrapper[4772]: [+]process-running ok Nov 28 11:09:06 crc kubenswrapper[4772]: healthz check failed Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.657721 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.835336 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf" event={"ID":"24396734-e237-4fb3-9cae-8c08db3a9122","Type":"ContainerDied","Data":"44eff93824f19e901dd8c23110fc8900373cc83a04c249c949a3b963839b5b77"} Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.835400 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44eff93824f19e901dd8c23110fc8900373cc83a04c249c949a3b963839b5b77" Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.835449 4772 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.848256 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"37e74cda-9666-4ee6-9f17-e0771c8dbecb","Type":"ContainerStarted","Data":"30e3a3ee4f42e42e34dc65cc5e0a23a0c54e778fd79950e15a23db754e89aa36"}
Nov 28 11:09:06 crc kubenswrapper[4772]: I1128 11:09:06.869824 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.869800094 podStartE2EDuration="2.869800094s" podCreationTimestamp="2025-11-28 11:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:06.862909005 +0000 UTC m=+145.186152232" watchObservedRunningTime="2025-11-28 11:09:06.869800094 +0000 UTC m=+145.193043321"
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.042835 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 28 11:09:07 crc kubenswrapper[4772]: E1128 11:09:07.043107 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24396734-e237-4fb3-9cae-8c08db3a9122" containerName="collect-profiles"
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.043118 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="24396734-e237-4fb3-9cae-8c08db3a9122" containerName="collect-profiles"
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.043233 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="24396734-e237-4fb3-9cae-8c08db3a9122" containerName="collect-profiles"
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.043637 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.046419 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.076682 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/544d0ab2-19fb-46d8-9180-2e5f22ed5889-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"544d0ab2-19fb-46d8-9180-2e5f22ed5889\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.076798 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/544d0ab2-19fb-46d8-9180-2e5f22ed5889-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"544d0ab2-19fb-46d8-9180-2e5f22ed5889\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.080928 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.081513 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.177775 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/544d0ab2-19fb-46d8-9180-2e5f22ed5889-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"544d0ab2-19fb-46d8-9180-2e5f22ed5889\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.177849 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/544d0ab2-19fb-46d8-9180-2e5f22ed5889-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"544d0ab2-19fb-46d8-9180-2e5f22ed5889\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.177935 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/544d0ab2-19fb-46d8-9180-2e5f22ed5889-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"544d0ab2-19fb-46d8-9180-2e5f22ed5889\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.194371 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/544d0ab2-19fb-46d8-9180-2e5f22ed5889-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"544d0ab2-19fb-46d8-9180-2e5f22ed5889\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.397347 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
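The "Observed pod startup duration" entry above carries its own arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp. A stdlib-only sketch that reproduces the logged value from the logged timestamps (the layout string and the stripping of Go's monotonic " m=+…" suffix are assumptions inferred from the log text):

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// Layout matching timestamps like "2025-11-28 11:09:04 +0000 UTC";
// fractional seconds are optional with the .999999999 placeholder.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

// parseLogTime drops the monotonic-clock suffix (" m=+145.19…"), which
// time.Parse does not accept, then parses the wall-clock part.
func parseLogTime(s string) (time.Time, error) {
	if i := strings.Index(s, " m=+"); i >= 0 {
		s = s[:i]
	}
	return time.Parse(layout, s)
}

func main() {
	created, _ := parseLogTime("2025-11-28 11:09:04 +0000 UTC")
	watched, _ := parseLogTime("2025-11-28 11:09:06.869800094 +0000 UTC m=+145.193043321")
	// Prints 2.869800094, matching podStartSLOduration in the entry above.
	fmt.Println(watched.Sub(created).Seconds())
}
```

The zero-valued firstStartedPulling/lastFinishedPulling ("0001-01-01 00:00:00 +0000 UTC") indicate no image pull contributed to this pod's startup window.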
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.665067 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 11:09:07 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld
Nov 28 11:09:07 crc kubenswrapper[4772]: [+]process-running ok
Nov 28 11:09:07 crc kubenswrapper[4772]: healthz check failed
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.665130 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.875266 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.896042 4772 generic.go:334] "Generic (PLEG): container finished" podID="37e74cda-9666-4ee6-9f17-e0771c8dbecb" containerID="30e3a3ee4f42e42e34dc65cc5e0a23a0c54e778fd79950e15a23db754e89aa36" exitCode=0
Nov 28 11:09:07 crc kubenswrapper[4772]: I1128 11:09:07.897343 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"37e74cda-9666-4ee6-9f17-e0771c8dbecb","Type":"ContainerDied","Data":"30e3a3ee4f42e42e34dc65cc5e0a23a0c54e778fd79950e15a23db754e89aa36"}
Nov 28 11:09:07 crc kubenswrapper[4772]: W1128 11:09:07.903641 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod544d0ab2_19fb_46d8_9180_2e5f22ed5889.slice/crio-3829b893c986bf50f1eaf6afac4336532a1c2c5b54849a7765ca36f59322e92d WatchSource:0}: Error finding container 3829b893c986bf50f1eaf6afac4336532a1c2c5b54849a7765ca36f59322e92d: Status 404 returned error can't find the container with id 3829b893c986bf50f1eaf6afac4336532a1c2c5b54849a7765ca36f59322e92d
Nov 28 11:09:08 crc kubenswrapper[4772]: I1128 11:09:08.094085 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:09:08 crc kubenswrapper[4772]: I1128 11:09:08.094191 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:09:08 crc kubenswrapper[4772]: I1128 11:09:08.094236 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:09:08 crc kubenswrapper[4772]: I1128 11:09:08.094283 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:09:08 crc kubenswrapper[4772]: I1128 11:09:08.101914 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:09:08 crc kubenswrapper[4772]: I1128 11:09:08.102737 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:09:08 crc kubenswrapper[4772]: I1128 11:09:08.103233 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:09:08 crc kubenswrapper[4772]: I1128 11:09:08.104896 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:09:08 crc kubenswrapper[4772]: I1128 11:09:08.335048 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:09:08 crc kubenswrapper[4772]: I1128 11:09:08.346553 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 11:09:08 crc kubenswrapper[4772]: I1128 11:09:08.356118 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 11:09:08 crc kubenswrapper[4772]: I1128 11:09:08.661194 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 11:09:08 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld
Nov 28 11:09:08 crc kubenswrapper[4772]: [+]process-running ok
Nov 28 11:09:08 crc kubenswrapper[4772]: healthz check failed
Nov 28 11:09:08 crc kubenswrapper[4772]: I1128 11:09:08.661252 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 11:09:08 crc kubenswrapper[4772]: W1128 11:09:08.773726 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-cde82595bd46f41e83b9cec0f806faaf7f7dad9975abe7453acfffd99a627c69 WatchSource:0}: Error finding container cde82595bd46f41e83b9cec0f806faaf7f7dad9975abe7453acfffd99a627c69: Status 404 returned error can't find the container with id cde82595bd46f41e83b9cec0f806faaf7f7dad9975abe7453acfffd99a627c69
Nov 28 11:09:08 crc kubenswrapper[4772]: W1128 11:09:08.803282 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-944951a46d924a38f8d8fbc769e9a9174c3d748b4b30a3480af2ac49c37fafde WatchSource:0}: Error finding container 944951a46d924a38f8d8fbc769e9a9174c3d748b4b30a3480af2ac49c37fafde: Status 404 returned error can't find the container with id 944951a46d924a38f8d8fbc769e9a9174c3d748b4b30a3480af2ac49c37fafde
Nov 28 11:09:08 crc kubenswrapper[4772]: W1128 11:09:08.959493 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-954c4217253cabcb869a389f960663b2cf49f1467cf9dd6ba57a78cd019c908b WatchSource:0}: Error finding container 954c4217253cabcb869a389f960663b2cf49f1467cf9dd6ba57a78cd019c908b: Status 404 returned error can't find the container with id 954c4217253cabcb869a389f960663b2cf49f1467cf9dd6ba57a78cd019c908b
Nov 28 11:09:09 crc kubenswrapper[4772]: I1128 11:09:09.135627 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"544d0ab2-19fb-46d8-9180-2e5f22ed5889","Type":"ContainerStarted","Data":"3829b893c986bf50f1eaf6afac4336532a1c2c5b54849a7765ca36f59322e92d"}
Nov 28 11:09:09 crc kubenswrapper[4772]: I1128 11:09:09.143645 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"944951a46d924a38f8d8fbc769e9a9174c3d748b4b30a3480af2ac49c37fafde"}
Nov 28 11:09:09 crc kubenswrapper[4772]: I1128 11:09:09.158780 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"cde82595bd46f41e83b9cec0f806faaf7f7dad9975abe7453acfffd99a627c69"}
Nov 28 11:09:09 crc kubenswrapper[4772]: I1128 11:09:09.533477 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 11:09:09 crc kubenswrapper[4772]: I1128 11:09:09.618777 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37e74cda-9666-4ee6-9f17-e0771c8dbecb-kubelet-dir\") pod \"37e74cda-9666-4ee6-9f17-e0771c8dbecb\" (UID: \"37e74cda-9666-4ee6-9f17-e0771c8dbecb\") "
Nov 28 11:09:09 crc kubenswrapper[4772]: I1128 11:09:09.618875 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37e74cda-9666-4ee6-9f17-e0771c8dbecb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "37e74cda-9666-4ee6-9f17-e0771c8dbecb" (UID: "37e74cda-9666-4ee6-9f17-e0771c8dbecb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 11:09:09 crc kubenswrapper[4772]: I1128 11:09:09.618920 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37e74cda-9666-4ee6-9f17-e0771c8dbecb-kube-api-access\") pod \"37e74cda-9666-4ee6-9f17-e0771c8dbecb\" (UID: \"37e74cda-9666-4ee6-9f17-e0771c8dbecb\") "
Nov 28 11:09:09 crc kubenswrapper[4772]: I1128 11:09:09.619639 4772 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37e74cda-9666-4ee6-9f17-e0771c8dbecb-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 28 11:09:09 crc kubenswrapper[4772]: I1128 11:09:09.637440 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e74cda-9666-4ee6-9f17-e0771c8dbecb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "37e74cda-9666-4ee6-9f17-e0771c8dbecb" (UID: "37e74cda-9666-4ee6-9f17-e0771c8dbecb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:09:09 crc kubenswrapper[4772]: I1128 11:09:09.656641 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 11:09:09 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld
Nov 28 11:09:09 crc kubenswrapper[4772]: [+]process-running ok
Nov 28 11:09:09 crc kubenswrapper[4772]: healthz check failed
Nov 28 11:09:09 crc kubenswrapper[4772]: I1128 11:09:09.656738 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 11:09:09 crc kubenswrapper[4772]: I1128 11:09:09.721073 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37e74cda-9666-4ee6-9f17-e0771c8dbecb-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 11:09:10 crc kubenswrapper[4772]: I1128 11:09:10.171639 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2e7ff389b7bc2cd9768b81dc502e40983e83b75affdb33a3d7ae59f2b295c842"}
Nov 28 11:09:10 crc kubenswrapper[4772]: I1128 11:09:10.177586 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"53c9d52168004755fff7a9b9691d03fb100972735352eccc3eb89bb043b6b56c"}
Nov 28 11:09:10 crc kubenswrapper[4772]: I1128 11:09:10.177628 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"954c4217253cabcb869a389f960663b2cf49f1467cf9dd6ba57a78cd019c908b"}
Nov 28 11:09:10 crc kubenswrapper[4772]: I1128 11:09:10.188043 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"544d0ab2-19fb-46d8-9180-2e5f22ed5889","Type":"ContainerStarted","Data":"1c09d24a9ff2e12cb3ffb15d48d633fa5c6212e975f96e3021764aaed3007756"}
Nov 28 11:09:10 crc kubenswrapper[4772]: I1128 11:09:10.194793 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4591b776ec19db233ac71a8673b8e138fd0b2f8ef0e08d3f21c47dd843c5e276"}
Nov 28 11:09:10 crc kubenswrapper[4772]: I1128 11:09:10.194863 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:09:10 crc kubenswrapper[4772]: I1128 11:09:10.197572 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"37e74cda-9666-4ee6-9f17-e0771c8dbecb","Type":"ContainerDied","Data":"a4854c9c9218f03bae2d3b9f351d35727fd72ec1f1fedc1142fb30a6382a2ced"}
Nov 28 11:09:10 crc kubenswrapper[4772]: I1128 11:09:10.197607 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4854c9c9218f03bae2d3b9f351d35727fd72ec1f1fedc1142fb30a6382a2ced"
Nov 28 11:09:10 crc kubenswrapper[4772]: I1128 11:09:10.197664 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 11:09:10 crc kubenswrapper[4772]: I1128 11:09:10.233749 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.233730319 podStartE2EDuration="3.233730319s" podCreationTimestamp="2025-11-28 11:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:10.225699665 +0000 UTC m=+148.548942892" watchObservedRunningTime="2025-11-28 11:09:10.233730319 +0000 UTC m=+148.556973546"
Nov 28 11:09:10 crc kubenswrapper[4772]: I1128 11:09:10.669399 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 11:09:10 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld
Nov 28 11:09:10 crc kubenswrapper[4772]: [+]process-running ok
Nov 28 11:09:10 crc kubenswrapper[4772]: healthz check failed
Nov 28 11:09:10 crc kubenswrapper[4772]: I1128 11:09:10.669552 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 11:09:11 crc kubenswrapper[4772]: I1128 11:09:11.211378 4772 generic.go:334] "Generic (PLEG): container finished" podID="544d0ab2-19fb-46d8-9180-2e5f22ed5889" containerID="1c09d24a9ff2e12cb3ffb15d48d633fa5c6212e975f96e3021764aaed3007756" exitCode=0
Nov 28 11:09:11 crc kubenswrapper[4772]: I1128 11:09:11.211447 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"544d0ab2-19fb-46d8-9180-2e5f22ed5889","Type":"ContainerDied","Data":"1c09d24a9ff2e12cb3ffb15d48d633fa5c6212e975f96e3021764aaed3007756"}
Nov 28 11:09:11 crc kubenswrapper[4772]: I1128 11:09:11.587228 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-prq4h"
Nov 28 11:09:11 crc kubenswrapper[4772]: I1128 11:09:11.658659 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 11:09:11 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld
Nov 28 11:09:11 crc kubenswrapper[4772]: [+]process-running ok
Nov 28 11:09:11 crc kubenswrapper[4772]: healthz check failed
Nov 28 11:09:11 crc kubenswrapper[4772]: I1128 11:09:11.658711 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 11:09:12 crc kubenswrapper[4772]: I1128 11:09:12.656329 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 11:09:12 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld
Nov 28 11:09:12 crc kubenswrapper[4772]: [+]process-running ok
Nov 28 11:09:12 crc kubenswrapper[4772]: healthz check failed
Nov 28 11:09:12 crc kubenswrapper[4772]: I1128 11:09:12.656416 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 11:09:13 crc kubenswrapper[4772]: I1128 11:09:13.655834 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 11:09:13 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld
Nov 28 11:09:13 crc kubenswrapper[4772]: [+]process-running ok
Nov 28 11:09:13 crc kubenswrapper[4772]: healthz check failed
Nov 28 11:09:13 crc kubenswrapper[4772]: I1128 11:09:13.655893 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 11:09:14 crc kubenswrapper[4772]: I1128 11:09:14.655478 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 11:09:14 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld
Nov 28 11:09:14 crc kubenswrapper[4772]: [+]process-running ok
Nov 28 11:09:14 crc kubenswrapper[4772]: healthz check failed
Nov 28 11:09:14 crc kubenswrapper[4772]: I1128 11:09:14.655540 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 11:09:15 crc kubenswrapper[4772]: I1128 11:09:15.657705 4772 patch_prober.go:28] interesting pod/router-default-5444994796-pjlzq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 11:09:15 crc kubenswrapper[4772]: [+]has-synced ok
Nov 28 11:09:15 crc kubenswrapper[4772]: [+]process-running ok
Nov 28 11:09:15 crc kubenswrapper[4772]: healthz check failed
Nov 28 11:09:15 crc kubenswrapper[4772]: I1128 11:09:15.658180 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pjlzq" podUID="9f10a259-c46a-4325-8b94-133ebbc6041a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 11:09:16 crc kubenswrapper[4772]: I1128 11:09:16.213518 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-wwxmx"
Nov 28 11:09:16 crc kubenswrapper[4772]: I1128 11:09:16.500340 4772 patch_prober.go:28] interesting pod/console-f9d7485db-xffg6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
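The router's probe bodies above follow the healthz verbose format: one `[+]`/`[-]` line per sub-check, then a summary line. Note how by 11:09:15 `has-synced` has flipped to `[+]` while `backend-http` is still failing. A small stdlib sketch that splits such a body into passing and failing checks (the body constant is reassembled from the 11:09:06 entry; treating the unprefixed last line as the summary is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// Probe body as reassembled from the journal lines above.
const body = `[-]backend-http failed: reason withheld
[-]has-synced failed: reason withheld
[+]process-running ok
healthz check failed`

func main() {
	var pass, fail []string
	for _, line := range strings.Split(body, "\n") {
		switch {
		case strings.HasPrefix(line, "[+]"):
			pass = append(pass, strings.Fields(line[3:])[0]) // check name only
		case strings.HasPrefix(line, "[-]"):
			fail = append(fail, strings.Fields(line[3:])[0])
		}
	}
	fmt.Println("ok:     ", pass) // [process-running]
	fmt.Println("failing:", fail) // [backend-http has-synced]
}
```

"reason withheld" simply means the endpoint was queried without whatever verbosity or authorization exposes per-check detail, so only pass/fail is visible here.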
Nov 28 11:09:16 crc kubenswrapper[4772]: I1128 11:09:16.500413 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xffg6" podUID="7c2f8de4-e5c0-493a-b16f-b415832ba9bd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused"
Nov 28 11:09:16 crc kubenswrapper[4772]: I1128 11:09:16.656023 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-pjlzq"
Nov 28 11:09:16 crc kubenswrapper[4772]: I1128 11:09:16.658572 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-pjlzq"
Nov 28 11:09:17 crc kubenswrapper[4772]: I1128 11:09:17.298524 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn"
Nov 28 11:09:19 crc kubenswrapper[4772]: I1128 11:09:19.062472 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 11:09:19 crc kubenswrapper[4772]: I1128 11:09:19.151048 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/544d0ab2-19fb-46d8-9180-2e5f22ed5889-kubelet-dir\") pod \"544d0ab2-19fb-46d8-9180-2e5f22ed5889\" (UID: \"544d0ab2-19fb-46d8-9180-2e5f22ed5889\") "
Nov 28 11:09:19 crc kubenswrapper[4772]: I1128 11:09:19.151180 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/544d0ab2-19fb-46d8-9180-2e5f22ed5889-kube-api-access\") pod \"544d0ab2-19fb-46d8-9180-2e5f22ed5889\" (UID: \"544d0ab2-19fb-46d8-9180-2e5f22ed5889\") "
Nov 28 11:09:19 crc kubenswrapper[4772]: I1128 11:09:19.151523 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/544d0ab2-19fb-46d8-9180-2e5f22ed5889-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "544d0ab2-19fb-46d8-9180-2e5f22ed5889" (UID: "544d0ab2-19fb-46d8-9180-2e5f22ed5889"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 11:09:19 crc kubenswrapper[4772]: I1128 11:09:19.159014 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544d0ab2-19fb-46d8-9180-2e5f22ed5889-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "544d0ab2-19fb-46d8-9180-2e5f22ed5889" (UID: "544d0ab2-19fb-46d8-9180-2e5f22ed5889"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:09:19 crc kubenswrapper[4772]: I1128 11:09:19.253769 4772 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/544d0ab2-19fb-46d8-9180-2e5f22ed5889-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 28 11:09:19 crc kubenswrapper[4772]: I1128 11:09:19.253798 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/544d0ab2-19fb-46d8-9180-2e5f22ed5889-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 11:09:19 crc kubenswrapper[4772]: I1128 11:09:19.288224 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"544d0ab2-19fb-46d8-9180-2e5f22ed5889","Type":"ContainerDied","Data":"3829b893c986bf50f1eaf6afac4336532a1c2c5b54849a7765ca36f59322e92d"}
Nov 28 11:09:19 crc kubenswrapper[4772]: I1128 11:09:19.288264 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3829b893c986bf50f1eaf6afac4336532a1c2c5b54849a7765ca36f59322e92d"
Nov 28 11:09:19 crc kubenswrapper[4772]: I1128 11:09:19.288276 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 11:09:23 crc kubenswrapper[4772]: I1128 11:09:23.709288 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs\") pod \"network-metrics-daemon-qstr6\" (UID: \"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\") " pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:09:23 crc kubenswrapper[4772]: I1128 11:09:23.731642 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/def9b3ab-2dc8-4f40-9d6b-346f9cdbc386-metrics-certs\") pod \"network-metrics-daemon-qstr6\" (UID: \"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386\") " pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:09:23 crc kubenswrapper[4772]: I1128 11:09:23.897000 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 11:09:23 crc kubenswrapper[4772]: I1128 11:09:23.897064 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 11:09:23 crc kubenswrapper[4772]: I1128 11:09:23.921983 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qstr6"
Nov 28 11:09:24 crc kubenswrapper[4772]: I1128 11:09:24.504230 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-pgb48"
Nov 28 11:09:26 crc kubenswrapper[4772]: I1128 11:09:26.503349 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-xffg6"
Nov 28 11:09:26 crc kubenswrapper[4772]: I1128 11:09:26.512311 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-xffg6"
Nov 28 11:09:35 crc kubenswrapper[4772]: E1128 11:09:35.150450 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 28 11:09:35 crc kubenswrapper[4772]: E1128 11:09:35.151042 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvhsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mt9sz_openshift-marketplace(f0eee58e-18fa-496f-b10f-6e590c7e39c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 11:09:35 crc kubenswrapper[4772]: E1128 11:09:35.152420 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mt9sz" podUID="f0eee58e-18fa-496f-b10f-6e590c7e39c8"
Nov 28 11:09:36 crc kubenswrapper[4772]: I1128 11:09:36.179322 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mlfhh"
Nov 28 11:09:41 crc kubenswrapper[4772]: E1128 11:09:41.392676 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 28 11:09:41 crc kubenswrapper[4772]: E1128 11:09:41.393176 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lz6lt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g6gqv_openshift-marketplace(7b72c082-4b2a-4357-b95c-51d456028f86): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 11:09:41 crc kubenswrapper[4772]: E1128 11:09:41.394369 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-g6gqv" podUID="7b72c082-4b2a-4357-b95c-51d456028f86"
Nov 28 11:09:42 crc kubenswrapper[4772]: E1128 11:09:42.065131 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mt9sz" podUID="f0eee58e-18fa-496f-b10f-6e590c7e39c8"
Nov 28 11:09:42 crc kubenswrapper[4772]: E1128 11:09:42.130228 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 28 11:09:42 crc kubenswrapper[4772]: E1128 11:09:42.130530 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxbfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dkfw4_openshift-marketplace(bfe29c61-95d6-476a-b23c-9b66f5f1c5f8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 11:09:42 crc kubenswrapper[4772]: E1128 11:09:42.131826 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dkfw4" podUID="bfe29c61-95d6-476a-b23c-9b66f5f1c5f8"
Nov 28 11:09:42 crc kubenswrapper[4772]: E1128 11:09:42.982189 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dkfw4" podUID="bfe29c61-95d6-476a-b23c-9b66f5f1c5f8"
Nov 28 11:09:42 crc kubenswrapper[4772]: E1128 11:09:42.982214 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-g6gqv" podUID="7b72c082-4b2a-4357-b95c-51d456028f86"
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.045015 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.045163 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4hdqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-9j8qq_openshift-marketplace(3108fd0f-46a1-45ab-b911-7a35f90a9a35): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.046369 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-9j8qq" podUID="3108fd0f-46a1-45ab-b911-7a35f90a9a35"
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.065092 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.065263 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5l78f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nbzf2_openshift-marketplace(e3a7107f-67af-4f5a-a863-a5a39bf589e2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.066514 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nbzf2" podUID="e3a7107f-67af-4f5a-a863-a5a39bf589e2"
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.103677 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.104238 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7p7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-d6mbw_openshift-marketplace(fa914cfd-35f4-469e-a762-bc1dccea9f23): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.104943 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.105102 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hbvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4b2ps_openshift-marketplace(d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.106335 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4b2ps" podUID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8"
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.109167 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-d6mbw" podUID="fa914cfd-35f4-469e-a762-bc1dccea9f23"
Nov 28 11:09:43 crc kubenswrapper[4772]: I1128 11:09:43.203342 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qstr6"]
Nov 28 11:09:43 crc kubenswrapper[4772]: W1128 11:09:43.209648 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddef9b3ab_2dc8_4f40_9d6b_346f9cdbc386.slice/crio-760f3fcee2de7b69e408b1688b7bcfcae9b5883deec838256dd2d71e84676787 WatchSource:0}: Error finding container 760f3fcee2de7b69e408b1688b7bcfcae9b5883deec838256dd2d71e84676787: Status 404 returned error can't find the container with id 760f3fcee2de7b69e408b1688b7bcfcae9b5883deec838256dd2d71e84676787
Nov 28 11:09:43 crc kubenswrapper[4772]: I1128 11:09:43.409944 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qstr6" event={"ID":"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386","Type":"ContainerStarted","Data":"760f3fcee2de7b69e408b1688b7bcfcae9b5883deec838256dd2d71e84676787"}
Nov 28 11:09:43 crc kubenswrapper[4772]: I1128 11:09:43.415118 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7864f" event={"ID":"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a","Type":"ContainerStarted","Data":"cdb4664c09105b003b67f7365a42ce02ffc1670806b031522fa118d5917fa0dc"}
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.416862 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-d6mbw" podUID="fa914cfd-35f4-469e-a762-bc1dccea9f23"
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.416888 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-nbzf2" podUID="e3a7107f-67af-4f5a-a863-a5a39bf589e2"
Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.417222 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4b2ps" podUID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8"
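The marketplace pods above cycle from ErrImagePull (the pull itself failed, here a canceled copy from registry.redhat.io) to ImagePullBackOff (the kubelet refusing to retry immediately). A stdlib-only sketch of the exponential back-off policy behind the "Back-off pulling image" messages; the 10s initial delay and 5m cap are the commonly cited kubelet defaults and should be treated as assumptions, not values read from this cluster:

```go
package main

import (
	"fmt"
	"time"
)

// backoffDelays returns the waits between successive pull attempts:
// each failure doubles the delay until it saturates at maxDelay.
func backoffDelays(initial, maxDelay time.Duration, attempts int) []time.Duration {
	var out []time.Duration
	d := initial
	for i := 0; i < attempts; i++ {
		out = append(out, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return out
}

func main() {
	fmt.Println(backoffDelays(10*time.Second, 5*time.Minute, 7))
	// [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
}
```

This is why the same pods keep reappearing with "Error syncing pod, skipping": each sync attempt inside the back-off window is rejected cheaply rather than hitting the registry again.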
podUID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" Nov 28 11:09:43 crc kubenswrapper[4772]: E1128 11:09:43.417275 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-9j8qq" podUID="3108fd0f-46a1-45ab-b911-7a35f90a9a35" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.421391 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qstr6" event={"ID":"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386","Type":"ContainerStarted","Data":"735eb628c6eea6e2e4ac0f9bb368f657d52848ac08011d025924ec4560a02d50"} Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.421680 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qstr6" event={"ID":"def9b3ab-2dc8-4f40-9d6b-346f9cdbc386","Type":"ContainerStarted","Data":"43b899746a7030a11ba20639f8f48e103928be97ac15cbd701ef191f4dfe8974"} Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.423470 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7864f" event={"ID":"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a","Type":"ContainerDied","Data":"cdb4664c09105b003b67f7365a42ce02ffc1670806b031522fa118d5917fa0dc"} Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.423383 4772 generic.go:334] "Generic (PLEG): container finished" podID="200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" containerID="cdb4664c09105b003b67f7365a42ce02ffc1670806b031522fa118d5917fa0dc" exitCode=0 Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.435599 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 11:09:44 crc kubenswrapper[4772]: E1128 11:09:44.435793 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544d0ab2-19fb-46d8-9180-2e5f22ed5889" containerName="pruner" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.435804 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="544d0ab2-19fb-46d8-9180-2e5f22ed5889" containerName="pruner" Nov 28 11:09:44 crc kubenswrapper[4772]: E1128 11:09:44.435836 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e74cda-9666-4ee6-9f17-e0771c8dbecb" containerName="pruner" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.435842 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e74cda-9666-4ee6-9f17-e0771c8dbecb" containerName="pruner" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.435944 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e74cda-9666-4ee6-9f17-e0771c8dbecb" containerName="pruner" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.435957 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="544d0ab2-19fb-46d8-9180-2e5f22ed5889" containerName="pruner" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.436305 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.439727 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.440044 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.441251 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-qstr6" podStartSLOduration=164.441234819 podStartE2EDuration="2m44.441234819s" podCreationTimestamp="2025-11-28 11:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:44.440235727 +0000 UTC m=+182.763478944" watchObservedRunningTime="2025-11-28 11:09:44.441234819 +0000 UTC m=+182.764478046" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.458228 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.565171 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e2561d6-9bb4-415c-86b2-b41e59ac2588-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0e2561d6-9bb4-415c-86b2-b41e59ac2588\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.565300 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e2561d6-9bb4-415c-86b2-b41e59ac2588-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0e2561d6-9bb4-415c-86b2-b41e59ac2588\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.667114 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e2561d6-9bb4-415c-86b2-b41e59ac2588-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0e2561d6-9bb4-415c-86b2-b41e59ac2588\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.667493 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e2561d6-9bb4-415c-86b2-b41e59ac2588-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0e2561d6-9bb4-415c-86b2-b41e59ac2588\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.667585 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e2561d6-9bb4-415c-86b2-b41e59ac2588-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0e2561d6-9bb4-415c-86b2-b41e59ac2588\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.685977 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e2561d6-9bb4-415c-86b2-b41e59ac2588-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0e2561d6-9bb4-415c-86b2-b41e59ac2588\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 
11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.772612 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:09:44 crc kubenswrapper[4772]: I1128 11:09:44.941648 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 11:09:45 crc kubenswrapper[4772]: I1128 11:09:45.431470 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7864f" event={"ID":"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a","Type":"ContainerStarted","Data":"2df144e5f3258dfb207064ba9cd9f68269e79ccfe0ae04242cc7e595dbb27f9a"} Nov 28 11:09:45 crc kubenswrapper[4772]: I1128 11:09:45.432949 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0e2561d6-9bb4-415c-86b2-b41e59ac2588","Type":"ContainerStarted","Data":"5b4cc6e261d37360778f9aad6bcabf821868701f3e17878286e2afe34ae373d7"} Nov 28 11:09:45 crc kubenswrapper[4772]: I1128 11:09:45.433038 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0e2561d6-9bb4-415c-86b2-b41e59ac2588","Type":"ContainerStarted","Data":"5d411290d578c8cc83f4baa06a6df8e96fdc7ba70fd74e4ed37b44ba62596823"} Nov 28 11:09:45 crc kubenswrapper[4772]: I1128 11:09:45.453449 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7864f" podStartSLOduration=3.217463275 podStartE2EDuration="43.453427309s" podCreationTimestamp="2025-11-28 11:09:02 +0000 UTC" firstStartedPulling="2025-11-28 11:09:04.77453647 +0000 UTC m=+143.097779857" lastFinishedPulling="2025-11-28 11:09:45.010500664 +0000 UTC m=+183.333743891" observedRunningTime="2025-11-28 11:09:45.450809656 +0000 UTC m=+183.774052883" watchObservedRunningTime="2025-11-28 11:09:45.453427309 +0000 UTC m=+183.776670556" Nov 28 11:09:45 crc kubenswrapper[4772]: I1128 11:09:45.473401 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.47338004 podStartE2EDuration="1.47338004s" podCreationTimestamp="2025-11-28 11:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:45.47177772 +0000 UTC m=+183.795020957" watchObservedRunningTime="2025-11-28 11:09:45.47338004 +0000 UTC m=+183.796623287" Nov 28 11:09:46 crc kubenswrapper[4772]: I1128 11:09:46.439211 4772 generic.go:334] "Generic (PLEG): container finished" podID="0e2561d6-9bb4-415c-86b2-b41e59ac2588" containerID="5b4cc6e261d37360778f9aad6bcabf821868701f3e17878286e2afe34ae373d7" exitCode=0 Nov 28 11:09:46 crc kubenswrapper[4772]: I1128 11:09:46.439313 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0e2561d6-9bb4-415c-86b2-b41e59ac2588","Type":"ContainerDied","Data":"5b4cc6e261d37360778f9aad6bcabf821868701f3e17878286e2afe34ae373d7"} Nov 28 11:09:47 crc kubenswrapper[4772]: I1128 11:09:47.694633 4772 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 11:09:47 crc kubenswrapper[4772]: I1128 11:09:47.809850 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e2561d6-9bb4-415c-86b2-b41e59ac2588-kubelet-dir\") pod \"0e2561d6-9bb4-415c-86b2-b41e59ac2588\" (UID: \"0e2561d6-9bb4-415c-86b2-b41e59ac2588\") "
Nov 28 11:09:47 crc kubenswrapper[4772]: I1128 11:09:47.809997 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e2561d6-9bb4-415c-86b2-b41e59ac2588-kube-api-access\") pod \"0e2561d6-9bb4-415c-86b2-b41e59ac2588\" (UID: \"0e2561d6-9bb4-415c-86b2-b41e59ac2588\") "
Nov 28 11:09:47 crc kubenswrapper[4772]: I1128 11:09:47.810017 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e2561d6-9bb4-415c-86b2-b41e59ac2588-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0e2561d6-9bb4-415c-86b2-b41e59ac2588" (UID: "0e2561d6-9bb4-415c-86b2-b41e59ac2588"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 11:09:47 crc kubenswrapper[4772]: I1128 11:09:47.810276 4772 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e2561d6-9bb4-415c-86b2-b41e59ac2588-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 28 11:09:47 crc kubenswrapper[4772]: I1128 11:09:47.815223 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2561d6-9bb4-415c-86b2-b41e59ac2588-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0e2561d6-9bb4-415c-86b2-b41e59ac2588" (UID: "0e2561d6-9bb4-415c-86b2-b41e59ac2588"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:09:47 crc kubenswrapper[4772]: I1128 11:09:47.910852 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e2561d6-9bb4-415c-86b2-b41e59ac2588-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 11:09:48 crc kubenswrapper[4772]: I1128 11:09:48.338842 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 11:09:48 crc kubenswrapper[4772]: I1128 11:09:48.452296 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0e2561d6-9bb4-415c-86b2-b41e59ac2588","Type":"ContainerDied","Data":"5d411290d578c8cc83f4baa06a6df8e96fdc7ba70fd74e4ed37b44ba62596823"}
Nov 28 11:09:48 crc kubenswrapper[4772]: I1128 11:09:48.452331 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d411290d578c8cc83f4baa06a6df8e96fdc7ba70fd74e4ed37b44ba62596823"
Nov 28 11:09:48 crc kubenswrapper[4772]: I1128 11:09:48.452342 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.031898 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 28 11:09:49 crc kubenswrapper[4772]: E1128 11:09:49.032109 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2561d6-9bb4-415c-86b2-b41e59ac2588" containerName="pruner"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.032121 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2561d6-9bb4-415c-86b2-b41e59ac2588" containerName="pruner"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.032241 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e2561d6-9bb4-415c-86b2-b41e59ac2588" containerName="pruner"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.032617 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.034636 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.035275 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.047480 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.130896 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-var-lock\") pod \"installer-9-crc\" (UID: \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.131150 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.131294 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-kube-api-access\") pod \"installer-9-crc\" (UID: \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.232468 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-var-lock\") pod \"installer-9-crc\" (UID: \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.232618 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.232639 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-var-lock\") pod \"installer-9-crc\" (UID: \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.232656 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-kube-api-access\") pod \"installer-9-crc\" (UID: \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.232718 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.250042 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-kube-api-access\") pod \"installer-9-crc\" (UID: \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.345185 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 11:09:49 crc kubenswrapper[4772]: I1128 11:09:49.547234 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 28 11:09:50 crc kubenswrapper[4772]: I1128 11:09:50.463488 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc","Type":"ContainerStarted","Data":"b3de8f67cf6e8acf16dd779fad6d917e3ea21fe8ad714af45fe5aaa021796cf7"}
Nov 28 11:09:50 crc kubenswrapper[4772]: I1128 11:09:50.465850 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc","Type":"ContainerStarted","Data":"563e1cd7b943ac10695fd565bd103a7755328f72cce50059486892bf92dc15b3"}
Nov 28 11:09:50 crc kubenswrapper[4772]: I1128 11:09:50.479636 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.479613816 podStartE2EDuration="1.479613816s" podCreationTimestamp="2025-11-28 11:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:09:50.478438509 +0000 UTC m=+188.801681746" watchObservedRunningTime="2025-11-28 11:09:50.479613816 +0000 UTC m=+188.802857043"
Nov 28 11:09:52 crc kubenswrapper[4772]: I1128 11:09:52.701294 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:52 crc kubenswrapper[4772]: I1128 11:09:52.701334 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:52 crc kubenswrapper[4772]: I1128 11:09:52.764242 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:53 crc kubenswrapper[4772]: I1128 11:09:53.514077 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:53 crc kubenswrapper[4772]: I1128 11:09:53.552618 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7864f"]
Nov 28 11:09:53 crc kubenswrapper[4772]: I1128 11:09:53.895981 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 11:09:53 crc kubenswrapper[4772]: I1128 11:09:53.896060 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 11:09:55 crc kubenswrapper[4772]: I1128 11:09:55.486146 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7864f" podUID="200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" containerName="registry-server" containerID="cri-o://2df144e5f3258dfb207064ba9cd9f68269e79ccfe0ae04242cc7e595dbb27f9a" gracePeriod=2
Nov 28 11:09:55 crc kubenswrapper[4772]: I1128 11:09:55.535664 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vsrmr"]
Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.221427 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7864f"
Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.320772 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-catalog-content\") pod \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\" (UID: \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\") "
Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.320819 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6qc6\" (UniqueName: \"kubernetes.io/projected/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-kube-api-access-b6qc6\") pod \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\" (UID: \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\") "
Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.320856 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-utilities\") pod \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\" (UID: \"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a\") "
Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.321509 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-utilities" (OuterVolumeSpecName: "utilities") pod "200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" (UID: "200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.326499 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-kube-api-access-b6qc6" (OuterVolumeSpecName: "kube-api-access-b6qc6") pod "200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" (UID: "200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a"). InnerVolumeSpecName "kube-api-access-b6qc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.422606 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6qc6\" (UniqueName: \"kubernetes.io/projected/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-kube-api-access-b6qc6\") on node \"crc\" DevicePath \"\"" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.422638 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.447949 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" (UID: "200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.493493 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkfw4" event={"ID":"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8","Type":"ContainerStarted","Data":"c0f14c8a77d843f37c9a67f50f4f67adfbe5fef575c8f5e7ce7a1027a0f5ef24"} Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.496011 4772 generic.go:334] "Generic (PLEG): container finished" podID="200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" containerID="2df144e5f3258dfb207064ba9cd9f68269e79ccfe0ae04242cc7e595dbb27f9a" exitCode=0 Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.496048 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7864f" event={"ID":"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a","Type":"ContainerDied","Data":"2df144e5f3258dfb207064ba9cd9f68269e79ccfe0ae04242cc7e595dbb27f9a"} Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.496054 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7864f" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.496073 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7864f" event={"ID":"200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a","Type":"ContainerDied","Data":"831b6e5e82aea83a682e5f6cdfa691cb974ac036ae276b6402cd077cd1ee9521"} Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.496093 4772 scope.go:117] "RemoveContainer" containerID="2df144e5f3258dfb207064ba9cd9f68269e79ccfe0ae04242cc7e595dbb27f9a" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.516960 4772 scope.go:117] "RemoveContainer" containerID="cdb4664c09105b003b67f7365a42ce02ffc1670806b031522fa118d5917fa0dc" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.523573 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.535859 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7864f"] Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.538994 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7864f"] Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.540132 4772 scope.go:117] "RemoveContainer" containerID="8e4f6e48fb38a480ff4139e3a35e876812bc9ed3bede5d32d6561f40343979d1" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.661014 4772 scope.go:117] "RemoveContainer" containerID="2df144e5f3258dfb207064ba9cd9f68269e79ccfe0ae04242cc7e595dbb27f9a" Nov 28 11:09:56 crc kubenswrapper[4772]: E1128 11:09:56.669220 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2df144e5f3258dfb207064ba9cd9f68269e79ccfe0ae04242cc7e595dbb27f9a\": container with ID starting with 2df144e5f3258dfb207064ba9cd9f68269e79ccfe0ae04242cc7e595dbb27f9a not found: ID does not exist" containerID="2df144e5f3258dfb207064ba9cd9f68269e79ccfe0ae04242cc7e595dbb27f9a" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.669260 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2df144e5f3258dfb207064ba9cd9f68269e79ccfe0ae04242cc7e595dbb27f9a"} err="failed to get container status \"2df144e5f3258dfb207064ba9cd9f68269e79ccfe0ae04242cc7e595dbb27f9a\": rpc error: code = NotFound desc = could not find container \"2df144e5f3258dfb207064ba9cd9f68269e79ccfe0ae04242cc7e595dbb27f9a\": container with ID starting with 2df144e5f3258dfb207064ba9cd9f68269e79ccfe0ae04242cc7e595dbb27f9a not found: ID does not exist" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.669304 4772 scope.go:117] "RemoveContainer" containerID="cdb4664c09105b003b67f7365a42ce02ffc1670806b031522fa118d5917fa0dc" Nov 28 11:09:56 crc kubenswrapper[4772]: E1128 11:09:56.669546 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdb4664c09105b003b67f7365a42ce02ffc1670806b031522fa118d5917fa0dc\": container with ID starting with cdb4664c09105b003b67f7365a42ce02ffc1670806b031522fa118d5917fa0dc not found: ID does not exist" containerID="cdb4664c09105b003b67f7365a42ce02ffc1670806b031522fa118d5917fa0dc" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.669578 4772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cdb4664c09105b003b67f7365a42ce02ffc1670806b031522fa118d5917fa0dc"} err="failed to get container status \"cdb4664c09105b003b67f7365a42ce02ffc1670806b031522fa118d5917fa0dc\": rpc error: code = NotFound desc = could not find container \"cdb4664c09105b003b67f7365a42ce02ffc1670806b031522fa118d5917fa0dc\": container with ID starting with cdb4664c09105b003b67f7365a42ce02ffc1670806b031522fa118d5917fa0dc not found: ID does not exist" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.669597 4772 scope.go:117] "RemoveContainer" containerID="8e4f6e48fb38a480ff4139e3a35e876812bc9ed3bede5d32d6561f40343979d1" Nov 28 11:09:56 crc kubenswrapper[4772]: E1128 11:09:56.669844 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e4f6e48fb38a480ff4139e3a35e876812bc9ed3bede5d32d6561f40343979d1\": container with ID starting with 8e4f6e48fb38a480ff4139e3a35e876812bc9ed3bede5d32d6561f40343979d1 not found: ID does not exist" containerID="8e4f6e48fb38a480ff4139e3a35e876812bc9ed3bede5d32d6561f40343979d1" Nov 28 11:09:56 crc kubenswrapper[4772]: I1128 11:09:56.669882 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e4f6e48fb38a480ff4139e3a35e876812bc9ed3bede5d32d6561f40343979d1"} err="failed to get container status \"8e4f6e48fb38a480ff4139e3a35e876812bc9ed3bede5d32d6561f40343979d1\": rpc error: code = NotFound desc = could not find container \"8e4f6e48fb38a480ff4139e3a35e876812bc9ed3bede5d32d6561f40343979d1\": container with ID starting with 8e4f6e48fb38a480ff4139e3a35e876812bc9ed3bede5d32d6561f40343979d1 not found: ID does not exist" Nov 28 11:09:57 crc kubenswrapper[4772]: I1128 11:09:57.502850 4772 generic.go:334] "Generic (PLEG): container finished" podID="7b72c082-4b2a-4357-b95c-51d456028f86" containerID="8300abfbb49037646793b2e3d0cb65e85643aac40d9624499e404e9984984c2c" exitCode=0 Nov 28 11:09:57 crc kubenswrapper[4772]: I1128 11:09:57.502923 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gqv" event={"ID":"7b72c082-4b2a-4357-b95c-51d456028f86","Type":"ContainerDied","Data":"8300abfbb49037646793b2e3d0cb65e85643aac40d9624499e404e9984984c2c"} Nov 28 11:09:57 crc kubenswrapper[4772]: I1128 11:09:57.506939 4772 generic.go:334] "Generic (PLEG): container finished" podID="f0eee58e-18fa-496f-b10f-6e590c7e39c8" containerID="4fb5a2cf67871f9db51e7f6f6bc8185e0d7ab36580c930084aa18dd8364cad8f" exitCode=0 Nov 28 11:09:57 crc kubenswrapper[4772]: I1128 11:09:57.506998 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mt9sz" event={"ID":"f0eee58e-18fa-496f-b10f-6e590c7e39c8","Type":"ContainerDied","Data":"4fb5a2cf67871f9db51e7f6f6bc8185e0d7ab36580c930084aa18dd8364cad8f"} Nov 28 11:09:57 crc kubenswrapper[4772]: I1128 11:09:57.512706 4772 generic.go:334] "Generic (PLEG): container finished" podID="bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" containerID="c0f14c8a77d843f37c9a67f50f4f67adfbe5fef575c8f5e7ce7a1027a0f5ef24" exitCode=0 Nov 28 11:09:57 crc kubenswrapper[4772]: I1128 11:09:57.512765 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkfw4" event={"ID":"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8","Type":"ContainerDied","Data":"c0f14c8a77d843f37c9a67f50f4f67adfbe5fef575c8f5e7ce7a1027a0f5ef24"} Nov 28 11:09:57 crc kubenswrapper[4772]: I1128 11:09:57.515544 4772 generic.go:334] "Generic (PLEG): container finished" 
podID="e3a7107f-67af-4f5a-a863-a5a39bf589e2" containerID="645fb69e8d24b92b097d02679adeb82e1a24f6251170116742a1e1b2920c90f7" exitCode=0 Nov 28 11:09:57 crc kubenswrapper[4772]: I1128 11:09:57.515569 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbzf2" event={"ID":"e3a7107f-67af-4f5a-a863-a5a39bf589e2","Type":"ContainerDied","Data":"645fb69e8d24b92b097d02679adeb82e1a24f6251170116742a1e1b2920c90f7"} Nov 28 11:09:58 crc kubenswrapper[4772]: I1128 11:09:58.004736 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" path="/var/lib/kubelet/pods/200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a/volumes" Nov 28 11:09:58 crc kubenswrapper[4772]: I1128 11:09:58.524488 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mt9sz" event={"ID":"f0eee58e-18fa-496f-b10f-6e590c7e39c8","Type":"ContainerStarted","Data":"f30c98a8507ababbde14b625bf0dc966e5bf3c63ffd63a961bf82fe3003de637"} Nov 28 11:09:58 crc kubenswrapper[4772]: I1128 11:09:58.526711 4772 generic.go:334] "Generic (PLEG): container finished" podID="3108fd0f-46a1-45ab-b911-7a35f90a9a35" containerID="9062649daa89ff34cce391f9c58488edf89ee6300b41356144587822ddaf0255" exitCode=0 Nov 28 11:09:58 crc kubenswrapper[4772]: I1128 11:09:58.526797 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9j8qq" event={"ID":"3108fd0f-46a1-45ab-b911-7a35f90a9a35","Type":"ContainerDied","Data":"9062649daa89ff34cce391f9c58488edf89ee6300b41356144587822ddaf0255"} Nov 28 11:09:58 crc kubenswrapper[4772]: I1128 11:09:58.529039 4772 generic.go:334] "Generic (PLEG): container finished" podID="fa914cfd-35f4-469e-a762-bc1dccea9f23" containerID="aa8dc5499ab34f3edbf4d5966b47aae5eb688bfb378161f3331c45863836e322" exitCode=0 Nov 28 11:09:58 crc kubenswrapper[4772]: I1128 11:09:58.529102 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6mbw" event={"ID":"fa914cfd-35f4-469e-a762-bc1dccea9f23","Type":"ContainerDied","Data":"aa8dc5499ab34f3edbf4d5966b47aae5eb688bfb378161f3331c45863836e322"} Nov 28 11:09:58 crc kubenswrapper[4772]: I1128 11:09:58.531230 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbzf2" event={"ID":"e3a7107f-67af-4f5a-a863-a5a39bf589e2","Type":"ContainerStarted","Data":"df0bb29b83570eabb54192fc05db4cb332bf44dfbcddebca9f28d9390a3f74cf"} Nov 28 11:09:58 crc kubenswrapper[4772]: I1128 11:09:58.535948 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gqv" event={"ID":"7b72c082-4b2a-4357-b95c-51d456028f86","Type":"ContainerStarted","Data":"6660694e0a236af133afdcbb89cde8bd925623de19572b4c9a63c75218a4ea37"} Nov 28 11:09:58 crc kubenswrapper[4772]: I1128 11:09:58.569650 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mt9sz" podStartSLOduration=4.269001805 podStartE2EDuration="57.569633177s" podCreationTimestamp="2025-11-28 11:09:01 +0000 UTC" firstStartedPulling="2025-11-28 11:09:04.751561752 +0000 UTC m=+143.074804979" lastFinishedPulling="2025-11-28 11:09:58.052193124 +0000 UTC m=+196.375436351" observedRunningTime="2025-11-28 11:09:58.545351589 +0000 UTC m=+196.868594816" watchObservedRunningTime="2025-11-28 11:09:58.569633177 +0000 UTC m=+196.892876404" Nov 28 11:09:58 crc kubenswrapper[4772]: I1128 11:09:58.620210 4772 
Nov 28 11:09:59 crc kubenswrapper[4772]: I1128 11:09:59.152091 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g6gqv"
Nov 28 11:09:59 crc kubenswrapper[4772]: I1128 11:09:59.152401 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g6gqv"
Nov 28 11:09:59 crc kubenswrapper[4772]: I1128 11:09:59.543417 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkfw4" event={"ID":"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8","Type":"ContainerStarted","Data":"830c536e3f526f8b98342d0d6439fbdf52935e108503e31fdb65da364e23741a"}
Nov 28 11:09:59 crc kubenswrapper[4772]: I1128 11:09:59.546538 4772 generic.go:334] "Generic (PLEG): container finished" podID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" containerID="e3c59d607027f3edf2a859c37d60136b5b2fb990c330ae27dd2ae4532f6e674b" exitCode=0
Nov 28 11:09:59 crc kubenswrapper[4772]: I1128 11:09:59.546638 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4b2ps" event={"ID":"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8","Type":"ContainerDied","Data":"e3c59d607027f3edf2a859c37d60136b5b2fb990c330ae27dd2ae4532f6e674b"}
Nov 28 11:09:59 crc kubenswrapper[4772]: I1128 11:09:59.575159 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9j8qq" event={"ID":"3108fd0f-46a1-45ab-b911-7a35f90a9a35","Type":"ContainerStarted","Data":"b991c8e72aebe8aae30978df38e8f47491ad26267df82018376d1a86fd34c0d3"}
Nov 28 11:09:59 crc kubenswrapper[4772]: I1128 11:09:59.581475 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6mbw" event={"ID":"fa914cfd-35f4-469e-a762-bc1dccea9f23","Type":"ContainerStarted","Data":"1dd407bfecedbaf32ab4d6e7b387b4911d01c2a8570f44756a51976fbb8601ed"}
Nov 28 11:09:59 crc kubenswrapper[4772]: I1128 11:09:59.614253 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g6gqv" podStartSLOduration=5.72605076 podStartE2EDuration="1m1.614236753s" podCreationTimestamp="2025-11-28 11:08:58 +0000 UTC" firstStartedPulling="2025-11-28 11:09:02.142736247 +0000 UTC m=+140.465979474" lastFinishedPulling="2025-11-28 11:09:58.03092224 +0000 UTC m=+196.354165467" observedRunningTime="2025-11-28 11:09:58.642701715 +0000 UTC m=+196.965944942" watchObservedRunningTime="2025-11-28 11:09:59.614236753 +0000 UTC m=+197.937479980"
Nov 28 11:09:59 crc kubenswrapper[4772]: I1128 11:09:59.614913 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dkfw4" podStartSLOduration=5.09729016 podStartE2EDuration="1m1.614905872s" podCreationTimestamp="2025-11-28 11:08:58 +0000 UTC" firstStartedPulling="2025-11-28 11:09:01.909574185 +0000 UTC m=+140.232817412" lastFinishedPulling="2025-11-28 11:09:58.427189897 +0000 UTC m=+196.750433124" observedRunningTime="2025-11-28 11:09:59.612208596 +0000 UTC m=+197.935451823" watchObservedRunningTime="2025-11-28 11:09:59.614905872 +0000 UTC m=+197.938149099"
Nov 28 11:09:59 crc kubenswrapper[4772]: I1128 11:09:59.642966 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d6mbw" podStartSLOduration=5.237220535 podStartE2EDuration="1m0.642944117s" podCreationTimestamp="2025-11-28 11:08:59 +0000 UTC" firstStartedPulling="2025-11-28 11:09:03.584813038 +0000 UTC m=+141.908056265" lastFinishedPulling="2025-11-28 11:09:58.99053662 +0000 UTC m=+197.313779847" observedRunningTime="2025-11-28 11:09:59.640132348 +0000 UTC m=+197.963375575" watchObservedRunningTime="2025-11-28 11:09:59.642944117 +0000 UTC m=+197.966187344"
Nov 28 11:09:59 crc kubenswrapper[4772]: I1128 11:09:59.695663 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9j8qq" podStartSLOduration=4.413939183 podStartE2EDuration="59.695644794s" podCreationTimestamp="2025-11-28 11:09:00 +0000 UTC" firstStartedPulling="2025-11-28 11:09:03.824812928 +0000 UTC m=+142.148056145" lastFinishedPulling="2025-11-28 11:09:59.106518529 +0000 UTC m=+197.429761756" observedRunningTime="2025-11-28 11:09:59.68803463 +0000 UTC m=+198.011277857" watchObservedRunningTime="2025-11-28 11:09:59.695644794 +0000 UTC m=+198.018888021"
Nov 28 11:10:00 crc kubenswrapper[4772]: I1128 11:10:00.190038 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g6gqv" podUID="7b72c082-4b2a-4357-b95c-51d456028f86" containerName="registry-server" probeResult="failure" output=<
Nov 28 11:10:00 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s
Nov 28 11:10:00 crc kubenswrapper[4772]: >
Nov 28 11:10:00 crc kubenswrapper[4772]: I1128 11:10:00.590452 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4b2ps" event={"ID":"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8","Type":"ContainerStarted","Data":"fe7846745aa357811bcc63531f97e7526ca0aef2d38d4ceb823927eda4c60a9b"}
Nov 28 11:10:00 crc kubenswrapper[4772]: I1128 11:10:00.617907 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4b2ps" podStartSLOduration=5.141022836 podStartE2EDuration="1m1.617878129s" podCreationTimestamp="2025-11-28 11:08:59 +0000 UTC" firstStartedPulling="2025-11-28 11:09:03.825105067 +0000 UTC m=+142.148348294" lastFinishedPulling="2025-11-28 11:10:00.30196036 +0000 UTC m=+198.625203587" observedRunningTime="2025-11-28 11:10:00.613743283 +0000 UTC m=+198.936986520" watchObservedRunningTime="2025-11-28 11:10:00.617878129 +0000 UTC m=+198.941121356"
Nov 28 11:10:01 crc kubenswrapper[4772]: I1128 11:10:01.238750 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9j8qq"
Nov 28 11:10:01 crc kubenswrapper[4772]: I1128 11:10:01.239285 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9j8qq"
Nov 28 11:10:01 crc kubenswrapper[4772]: I1128 11:10:01.282786 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9j8qq"
Nov 28 11:10:01 crc kubenswrapper[4772]: I1128 11:10:01.782766 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nbzf2"
Nov 28 11:10:01 crc kubenswrapper[4772]: I1128 11:10:01.782893 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nbzf2"
Nov 28 11:10:01 crc kubenswrapper[4772]: I1128 11:10:01.825950 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nbzf2"
Nov 28 11:10:02 crc kubenswrapper[4772]: I1128 11:10:02.334119 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mt9sz"
Nov 28 11:10:02 crc kubenswrapper[4772]: I1128 11:10:02.334187 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mt9sz"
Nov 28 11:10:03 crc kubenswrapper[4772]: I1128 11:10:03.371595 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mt9sz" podUID="f0eee58e-18fa-496f-b10f-6e590c7e39c8" containerName="registry-server" probeResult="failure" output=<
Nov 28 11:10:03 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s
Nov 28 11:10:03 crc kubenswrapper[4772]: >
Nov 28 11:10:03 crc kubenswrapper[4772]: I1128 11:10:03.639967 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nbzf2"
Nov 28 11:10:04 crc kubenswrapper[4772]: I1128 11:10:04.590650 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbzf2"]
Nov 28 11:10:05 crc kubenswrapper[4772]: I1128 11:10:05.614224 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nbzf2" podUID="e3a7107f-67af-4f5a-a863-a5a39bf589e2" containerName="registry-server" containerID="cri-o://df0bb29b83570eabb54192fc05db4cb332bf44dfbcddebca9f28d9390a3f74cf" gracePeriod=2
Nov 28 11:10:08 crc kubenswrapper[4772]: I1128 11:10:08.629970 4772 generic.go:334] "Generic (PLEG): container finished" podID="e3a7107f-67af-4f5a-a863-a5a39bf589e2" containerID="df0bb29b83570eabb54192fc05db4cb332bf44dfbcddebca9f28d9390a3f74cf" exitCode=0
Nov 28 11:10:08 crc kubenswrapper[4772]: I1128 11:10:08.630033 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbzf2" event={"ID":"e3a7107f-67af-4f5a-a863-a5a39bf589e2","Type":"ContainerDied","Data":"df0bb29b83570eabb54192fc05db4cb332bf44dfbcddebca9f28d9390a3f74cf"}
Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.221119 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g6gqv"
Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.228961 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dkfw4"
Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.229717 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dkfw4"
Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.265015 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbzf2"
Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.275736 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g6gqv"
Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.275852 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dkfw4"
Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.388012 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a7107f-67af-4f5a-a863-a5a39bf589e2-catalog-content\") pod \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\" (UID: \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\") "
Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.388116 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a7107f-67af-4f5a-a863-a5a39bf589e2-utilities\") pod \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\" (UID: \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\") "
Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.388152 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l78f\" (UniqueName: \"kubernetes.io/projected/e3a7107f-67af-4f5a-a863-a5a39bf589e2-kube-api-access-5l78f\") pod \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\" (UID: \"e3a7107f-67af-4f5a-a863-a5a39bf589e2\") "
Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.388984 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3a7107f-67af-4f5a-a863-a5a39bf589e2-utilities" (OuterVolumeSpecName: "utilities") pod "e3a7107f-67af-4f5a-a863-a5a39bf589e2" (UID: "e3a7107f-67af-4f5a-a863-a5a39bf589e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.400544 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a7107f-67af-4f5a-a863-a5a39bf589e2-kube-api-access-5l78f" (OuterVolumeSpecName: "kube-api-access-5l78f") pod "e3a7107f-67af-4f5a-a863-a5a39bf589e2" (UID: "e3a7107f-67af-4f5a-a863-a5a39bf589e2"). InnerVolumeSpecName "kube-api-access-5l78f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.406777 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3a7107f-67af-4f5a-a863-a5a39bf589e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3a7107f-67af-4f5a-a863-a5a39bf589e2" (UID: "e3a7107f-67af-4f5a-a863-a5a39bf589e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.489499 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a7107f-67af-4f5a-a863-a5a39bf589e2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.489536 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a7107f-67af-4f5a-a863-a5a39bf589e2-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.489570 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l78f\" (UniqueName: \"kubernetes.io/projected/e3a7107f-67af-4f5a-a863-a5a39bf589e2-kube-api-access-5l78f\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.545480 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.545530 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.585412 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.650834 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbzf2" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.654235 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbzf2" event={"ID":"e3a7107f-67af-4f5a-a863-a5a39bf589e2","Type":"ContainerDied","Data":"87f67841fdd1adab3b98d8d50c57f57590c45991dc34db3107476bde19f1b2cc"} Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.654294 4772 scope.go:117] "RemoveContainer" containerID="df0bb29b83570eabb54192fc05db4cb332bf44dfbcddebca9f28d9390a3f74cf" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.665509 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.665576 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.681789 4772 scope.go:117] "RemoveContainer" containerID="645fb69e8d24b92b097d02679adeb82e1a24f6251170116742a1e1b2920c90f7" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.698620 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbzf2"] Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.701906 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbzf2"] Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.718550 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.718707 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dkfw4" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.722249 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:10:09 crc kubenswrapper[4772]: I1128 11:10:09.727194 4772 scope.go:117] "RemoveContainer" containerID="a21b84698a0116d24bb32d1e49e5f7f29ce0d25b839d6b58922fbb1ff426849f" Nov 28 11:10:10 crc kubenswrapper[4772]: I1128 11:10:10.002171 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a7107f-67af-4f5a-a863-a5a39bf589e2" path="/var/lib/kubelet/pods/e3a7107f-67af-4f5a-a863-a5a39bf589e2/volumes" Nov 28 11:10:10 crc kubenswrapper[4772]: I1128 11:10:10.695516 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:10:11 crc kubenswrapper[4772]: I1128 11:10:11.289074 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9j8qq" Nov 28 11:10:11 crc kubenswrapper[4772]: I1128 11:10:11.589503 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d6mbw"] Nov 28 11:10:11 crc kubenswrapper[4772]: I1128 11:10:11.661645 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d6mbw" podUID="fa914cfd-35f4-469e-a762-bc1dccea9f23" containerName="registry-server" containerID="cri-o://1dd407bfecedbaf32ab4d6e7b387b4911d01c2a8570f44756a51976fbb8601ed" gracePeriod=2 Nov 28 11:10:12 crc kubenswrapper[4772]: I1128 11:10:12.380061 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mt9sz" Nov 28 11:10:12 crc kubenswrapper[4772]: I1128 11:10:12.415558 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mt9sz" Nov 28 11:10:12 crc kubenswrapper[4772]: I1128 11:10:12.587969 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4b2ps"] Nov 28 11:10:12 crc kubenswrapper[4772]: I1128 11:10:12.674189 4772 generic.go:334] "Generic (PLEG): container finished" podID="fa914cfd-35f4-469e-a762-bc1dccea9f23" containerID="1dd407bfecedbaf32ab4d6e7b387b4911d01c2a8570f44756a51976fbb8601ed" exitCode=0 Nov 28 11:10:12 crc kubenswrapper[4772]: I1128 11:10:12.674279 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6mbw" event={"ID":"fa914cfd-35f4-469e-a762-bc1dccea9f23","Type":"ContainerDied","Data":"1dd407bfecedbaf32ab4d6e7b387b4911d01c2a8570f44756a51976fbb8601ed"} Nov 28 11:10:12 crc kubenswrapper[4772]: I1128 11:10:12.674524 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4b2ps" podUID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" containerName="registry-server" containerID="cri-o://fe7846745aa357811bcc63531f97e7526ca0aef2d38d4ceb823927eda4c60a9b" gracePeriod=2 Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.009321 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.116810 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.132173 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-catalog-content\") pod \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\" (UID: \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\") " Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.132382 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hbvb\" (UniqueName: \"kubernetes.io/projected/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-kube-api-access-5hbvb\") pod \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\" (UID: \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\") " Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.132497 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-utilities\") pod \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\" (UID: \"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8\") " Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.133780 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-utilities" (OuterVolumeSpecName: "utilities") pod "d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" (UID: "d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.136082 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.141311 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-kube-api-access-5hbvb" (OuterVolumeSpecName: "kube-api-access-5hbvb") pod "d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" (UID: "d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8"). InnerVolumeSpecName "kube-api-access-5hbvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.181513 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" (UID: "d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.237480 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7p7h\" (UniqueName: \"kubernetes.io/projected/fa914cfd-35f4-469e-a762-bc1dccea9f23-kube-api-access-g7p7h\") pod \"fa914cfd-35f4-469e-a762-bc1dccea9f23\" (UID: \"fa914cfd-35f4-469e-a762-bc1dccea9f23\") " Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.237595 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa914cfd-35f4-469e-a762-bc1dccea9f23-utilities\") pod \"fa914cfd-35f4-469e-a762-bc1dccea9f23\" (UID: \"fa914cfd-35f4-469e-a762-bc1dccea9f23\") " Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.237625 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa914cfd-35f4-469e-a762-bc1dccea9f23-catalog-content\") pod \"fa914cfd-35f4-469e-a762-bc1dccea9f23\" (UID: \"fa914cfd-35f4-469e-a762-bc1dccea9f23\") " Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.237844 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.237863 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hbvb\" (UniqueName: \"kubernetes.io/projected/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8-kube-api-access-5hbvb\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.238353 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa914cfd-35f4-469e-a762-bc1dccea9f23-utilities" (OuterVolumeSpecName: "utilities") pod "fa914cfd-35f4-469e-a762-bc1dccea9f23" (UID: "fa914cfd-35f4-469e-a762-bc1dccea9f23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.240254 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa914cfd-35f4-469e-a762-bc1dccea9f23-kube-api-access-g7p7h" (OuterVolumeSpecName: "kube-api-access-g7p7h") pod "fa914cfd-35f4-469e-a762-bc1dccea9f23" (UID: "fa914cfd-35f4-469e-a762-bc1dccea9f23"). InnerVolumeSpecName "kube-api-access-g7p7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.280567 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa914cfd-35f4-469e-a762-bc1dccea9f23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa914cfd-35f4-469e-a762-bc1dccea9f23" (UID: "fa914cfd-35f4-469e-a762-bc1dccea9f23"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.339300 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7p7h\" (UniqueName: \"kubernetes.io/projected/fa914cfd-35f4-469e-a762-bc1dccea9f23-kube-api-access-g7p7h\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.339631 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa914cfd-35f4-469e-a762-bc1dccea9f23-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.339643 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa914cfd-35f4-469e-a762-bc1dccea9f23-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.683181 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6mbw" event={"ID":"fa914cfd-35f4-469e-a762-bc1dccea9f23","Type":"ContainerDied","Data":"cece4cc62fcba71f15985254690a866e1921c6914f3454e846e295b055dd6844"} Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.683221 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d6mbw" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.683245 4772 scope.go:117] "RemoveContainer" containerID="1dd407bfecedbaf32ab4d6e7b387b4911d01c2a8570f44756a51976fbb8601ed" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.687463 4772 generic.go:334] "Generic (PLEG): container finished" podID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" containerID="fe7846745aa357811bcc63531f97e7526ca0aef2d38d4ceb823927eda4c60a9b" exitCode=0 Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.687499 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4b2ps" event={"ID":"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8","Type":"ContainerDied","Data":"fe7846745aa357811bcc63531f97e7526ca0aef2d38d4ceb823927eda4c60a9b"} Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.687521 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4b2ps" event={"ID":"d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8","Type":"ContainerDied","Data":"4289cbbc212f5b797765e01fec47e7450896e854fdb77602c1b333df330a7881"} Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.687565 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4b2ps" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.701825 4772 scope.go:117] "RemoveContainer" containerID="aa8dc5499ab34f3edbf4d5966b47aae5eb688bfb378161f3331c45863836e322" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.722715 4772 scope.go:117] "RemoveContainer" containerID="609e1ee42aacba3d7f2b6410484dafc2f02d110e42c00a9a6ed04087e043441f" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.733220 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d6mbw"] Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.737076 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d6mbw"] Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.746214 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4b2ps"] Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.750046 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4b2ps"] Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.763940 4772 scope.go:117] "RemoveContainer" containerID="fe7846745aa357811bcc63531f97e7526ca0aef2d38d4ceb823927eda4c60a9b" Nov 28 11:10:13 crc kubenswrapper[4772]: E1128 11:10:13.764479 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa914cfd_35f4_469e_a762_bc1dccea9f23.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd53c2f7b_8b36_4a43_9eaf_8ed96aa2d2c8.slice\": RecentStats: unable to find data in memory cache]" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.777629 4772 scope.go:117] "RemoveContainer" containerID="e3c59d607027f3edf2a859c37d60136b5b2fb990c330ae27dd2ae4532f6e674b" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.810197 4772 scope.go:117] "RemoveContainer" containerID="bbdb80703f22b497c51dd2c8ecf3dcf0e02b32c38e324a6b97602cac811b43d6" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.837133 4772 scope.go:117] "RemoveContainer" containerID="fe7846745aa357811bcc63531f97e7526ca0aef2d38d4ceb823927eda4c60a9b" Nov 28 11:10:13 crc kubenswrapper[4772]: E1128 11:10:13.837524 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7846745aa357811bcc63531f97e7526ca0aef2d38d4ceb823927eda4c60a9b\": container with ID starting with fe7846745aa357811bcc63531f97e7526ca0aef2d38d4ceb823927eda4c60a9b not found: ID does not exist" containerID="fe7846745aa357811bcc63531f97e7526ca0aef2d38d4ceb823927eda4c60a9b" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.837560 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7846745aa357811bcc63531f97e7526ca0aef2d38d4ceb823927eda4c60a9b"} err="failed to get container status \"fe7846745aa357811bcc63531f97e7526ca0aef2d38d4ceb823927eda4c60a9b\": rpc error: code = NotFound desc = could not find container \"fe7846745aa357811bcc63531f97e7526ca0aef2d38d4ceb823927eda4c60a9b\": container with ID starting with fe7846745aa357811bcc63531f97e7526ca0aef2d38d4ceb823927eda4c60a9b not found: ID does not exist" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.837588 4772 scope.go:117] "RemoveContainer" 
containerID="e3c59d607027f3edf2a859c37d60136b5b2fb990c330ae27dd2ae4532f6e674b" Nov 28 11:10:13 crc kubenswrapper[4772]: E1128 11:10:13.838107 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c59d607027f3edf2a859c37d60136b5b2fb990c330ae27dd2ae4532f6e674b\": container with ID starting with e3c59d607027f3edf2a859c37d60136b5b2fb990c330ae27dd2ae4532f6e674b not found: ID does not exist" containerID="e3c59d607027f3edf2a859c37d60136b5b2fb990c330ae27dd2ae4532f6e674b" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.838156 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c59d607027f3edf2a859c37d60136b5b2fb990c330ae27dd2ae4532f6e674b"} err="failed to get container status \"e3c59d607027f3edf2a859c37d60136b5b2fb990c330ae27dd2ae4532f6e674b\": rpc error: code = NotFound desc = could not find container \"e3c59d607027f3edf2a859c37d60136b5b2fb990c330ae27dd2ae4532f6e674b\": container with ID starting with e3c59d607027f3edf2a859c37d60136b5b2fb990c330ae27dd2ae4532f6e674b not found: ID does not exist" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.838187 4772 scope.go:117] "RemoveContainer" containerID="bbdb80703f22b497c51dd2c8ecf3dcf0e02b32c38e324a6b97602cac811b43d6" Nov 28 11:10:13 crc kubenswrapper[4772]: E1128 11:10:13.838512 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbdb80703f22b497c51dd2c8ecf3dcf0e02b32c38e324a6b97602cac811b43d6\": container with ID starting with bbdb80703f22b497c51dd2c8ecf3dcf0e02b32c38e324a6b97602cac811b43d6 not found: ID does not exist" containerID="bbdb80703f22b497c51dd2c8ecf3dcf0e02b32c38e324a6b97602cac811b43d6" Nov 28 11:10:13 crc kubenswrapper[4772]: I1128 11:10:13.838536 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbdb80703f22b497c51dd2c8ecf3dcf0e02b32c38e324a6b97602cac811b43d6"} err="failed to get container status \"bbdb80703f22b497c51dd2c8ecf3dcf0e02b32c38e324a6b97602cac811b43d6\": rpc error: code = NotFound desc = could not find container \"bbdb80703f22b497c51dd2c8ecf3dcf0e02b32c38e324a6b97602cac811b43d6\": container with ID starting with bbdb80703f22b497c51dd2c8ecf3dcf0e02b32c38e324a6b97602cac811b43d6 not found: ID does not exist" Nov 28 11:10:14 crc kubenswrapper[4772]: I1128 11:10:14.000751 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" path="/var/lib/kubelet/pods/d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8/volumes" Nov 28 11:10:14 crc kubenswrapper[4772]: I1128 11:10:14.001326 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa914cfd-35f4-469e-a762-bc1dccea9f23" path="/var/lib/kubelet/pods/fa914cfd-35f4-469e-a762-bc1dccea9f23/volumes" Nov 28 11:10:20 crc kubenswrapper[4772]: I1128 11:10:20.576959 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" podUID="fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" containerName="oauth-openshift" containerID="cri-o://0fa72c8bfb52603f7dcdd4fb82dc390cc37765362f4d293ddbb512b77ecb2ffa" gracePeriod=15 Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.656941 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.686915 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6686467b65-pzjtt"] Nov 28 11:10:23 crc kubenswrapper[4772]: E1128 11:10:23.687169 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a7107f-67af-4f5a-a863-a5a39bf589e2" containerName="extract-content" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687197 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a7107f-67af-4f5a-a863-a5a39bf589e2" containerName="extract-content" Nov 28 11:10:23 crc kubenswrapper[4772]: E1128 11:10:23.687220 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa914cfd-35f4-469e-a762-bc1dccea9f23" containerName="registry-server" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687229 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa914cfd-35f4-469e-a762-bc1dccea9f23" containerName="registry-server" Nov 28 11:10:23 crc kubenswrapper[4772]: E1128 11:10:23.687242 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" containerName="extract-content" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687250 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" containerName="extract-content" Nov 28 11:10:23 crc kubenswrapper[4772]: E1128 11:10:23.687262 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" containerName="registry-server" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687296 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" containerName="registry-server" Nov 28 11:10:23 crc kubenswrapper[4772]: E1128 11:10:23.687308 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" containerName="extract-utilities" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687316 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" containerName="extract-utilities" Nov 28 11:10:23 crc kubenswrapper[4772]: E1128 11:10:23.687327 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" containerName="registry-server" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687335 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" containerName="registry-server" Nov 28 11:10:23 crc kubenswrapper[4772]: E1128 11:10:23.687349 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa914cfd-35f4-469e-a762-bc1dccea9f23" containerName="extract-utilities" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687378 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa914cfd-35f4-469e-a762-bc1dccea9f23" containerName="extract-utilities" Nov 28 11:10:23 crc kubenswrapper[4772]: E1128 11:10:23.687391 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" containerName="extract-utilities" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687399 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" containerName="extract-utilities" Nov 28 11:10:23 crc kubenswrapper[4772]: E1128 11:10:23.687412 4772 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e3a7107f-67af-4f5a-a863-a5a39bf589e2" containerName="registry-server" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687420 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a7107f-67af-4f5a-a863-a5a39bf589e2" containerName="registry-server" Nov 28 11:10:23 crc kubenswrapper[4772]: E1128 11:10:23.687431 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" containerName="extract-content" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687439 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" containerName="extract-content" Nov 28 11:10:23 crc kubenswrapper[4772]: E1128 11:10:23.687458 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" containerName="oauth-openshift" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687467 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" containerName="oauth-openshift" Nov 28 11:10:23 crc kubenswrapper[4772]: E1128 11:10:23.687481 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a7107f-67af-4f5a-a863-a5a39bf589e2" containerName="extract-utilities" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687490 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a7107f-67af-4f5a-a863-a5a39bf589e2" containerName="extract-utilities" Nov 28 11:10:23 crc kubenswrapper[4772]: E1128 11:10:23.687504 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa914cfd-35f4-469e-a762-bc1dccea9f23" containerName="extract-content" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687512 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa914cfd-35f4-469e-a762-bc1dccea9f23" containerName="extract-content" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687621 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa914cfd-35f4-469e-a762-bc1dccea9f23" containerName="registry-server" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687635 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d53c2f7b-8b36-4a43-9eaf-8ed96aa2d2c8" containerName="registry-server" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687648 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a7107f-67af-4f5a-a863-a5a39bf589e2" containerName="registry-server" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687661 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" containerName="oauth-openshift" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.687676 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="200ce6b8-9c0e-4eb0-8011-aa3b4ebbb22a" containerName="registry-server" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.688176 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.697309 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6686467b65-pzjtt"] Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.742231 4772 generic.go:334] "Generic (PLEG): container finished" podID="fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" containerID="0fa72c8bfb52603f7dcdd4fb82dc390cc37765362f4d293ddbb512b77ecb2ffa" exitCode=0 Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.742269 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" event={"ID":"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6","Type":"ContainerDied","Data":"0fa72c8bfb52603f7dcdd4fb82dc390cc37765362f4d293ddbb512b77ecb2ffa"} Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.742297 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" event={"ID":"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6","Type":"ContainerDied","Data":"541b83c76470e2ebad5692aa0ac94a46cd20949c777da174adf96e302de078c2"} Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.742314 4772 scope.go:117] "RemoveContainer" containerID="0fa72c8bfb52603f7dcdd4fb82dc390cc37765362f4d293ddbb512b77ecb2ffa" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.742538 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vsrmr" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.760558 4772 scope.go:117] "RemoveContainer" containerID="0fa72c8bfb52603f7dcdd4fb82dc390cc37765362f4d293ddbb512b77ecb2ffa" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.760853 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-service-ca\") pod \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.760923 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-error\") pod \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.760961 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-serving-cert\") pod \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " Nov 28 11:10:23 crc kubenswrapper[4772]: E1128 11:10:23.760980 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fa72c8bfb52603f7dcdd4fb82dc390cc37765362f4d293ddbb512b77ecb2ffa\": container with ID starting with 0fa72c8bfb52603f7dcdd4fb82dc390cc37765362f4d293ddbb512b77ecb2ffa not found: ID does not exist" containerID="0fa72c8bfb52603f7dcdd4fb82dc390cc37765362f4d293ddbb512b77ecb2ffa" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761001 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-ocp-branding-template\") pod \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761035 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-audit-dir\") pod \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761064 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45jjs\" (UniqueName: \"kubernetes.io/projected/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-kube-api-access-45jjs\") pod \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761094 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-trusted-ca-bundle\") pod \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761120 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-router-certs\") pod \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761164 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-idp-0-file-data\") pod \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761191 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-cliconfig\") pod \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761215 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-provider-selection\") pod \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761245 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-session\") pod \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761296 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-login\") pod \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761329 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-audit-policies\") pod \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\" (UID: \"fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6\") " Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761460 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-user-template-login\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761022 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fa72c8bfb52603f7dcdd4fb82dc390cc37765362f4d293ddbb512b77ecb2ffa"} err="failed to get container status \"0fa72c8bfb52603f7dcdd4fb82dc390cc37765362f4d293ddbb512b77ecb2ffa\": rpc error: code = NotFound desc = could not find container \"0fa72c8bfb52603f7dcdd4fb82dc390cc37765362f4d293ddbb512b77ecb2ffa\": container with ID starting with 0fa72c8bfb52603f7dcdd4fb82dc390cc37765362f4d293ddbb512b77ecb2ffa not found: ID does not exist" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761494 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ecdc26e-aa91-41f3-a565-d212036fe1fc-audit-dir\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761518 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761524 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" (UID: "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761540 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-user-template-error\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761607 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761644 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sfh6\" (UniqueName: \"kubernetes.io/projected/3ecdc26e-aa91-41f3-a565-d212036fe1fc-kube-api-access-2sfh6\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761691 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-session\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761722 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761763 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-router-certs\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761825 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3ecdc26e-aa91-41f3-a565-d212036fe1fc-audit-policies\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761847 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761888 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761907 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-service-ca\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.761930 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.762049 4772 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.762140 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" (UID: "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.762158 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" (UID: "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.762729 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" (UID: "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.762743 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" (UID: "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.766663 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" (UID: "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.766848 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" (UID: "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.766890 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-kube-api-access-45jjs" (OuterVolumeSpecName: "kube-api-access-45jjs") pod "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" (UID: "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6"). InnerVolumeSpecName "kube-api-access-45jjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.767129 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" (UID: "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.767418 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" (UID: "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.767724 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" (UID: "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.767864 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" (UID: "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.774705 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" (UID: "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.774731 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" (UID: "fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.862620 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-session\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.862665 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.862692 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-router-certs\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.862719 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.862739 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/3ecdc26e-aa91-41f3-a565-d212036fe1fc-audit-policies\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.862760 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.862774 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-service-ca\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.862794 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.862845 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-user-template-login\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.862874 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ecdc26e-aa91-41f3-a565-d212036fe1fc-audit-dir\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.862899 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.862927 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-user-template-error\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.862951 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.862971 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sfh6\" (UniqueName: \"kubernetes.io/projected/3ecdc26e-aa91-41f3-a565-d212036fe1fc-kube-api-access-2sfh6\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863011 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863022 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863034 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863045 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863055 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45jjs\" (UniqueName: \"kubernetes.io/projected/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-kube-api-access-45jjs\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863067 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863076 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863086 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863095 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863106 4772 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863115 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863126 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863136 4772 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863660 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3ecdc26e-aa91-41f3-a565-d212036fe1fc-audit-policies\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.863727 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ecdc26e-aa91-41f3-a565-d212036fe1fc-audit-dir\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.864114 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-service-ca\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.864392 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.865070 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.865647 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-session\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " 
pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.866501 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-user-template-login\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.866728 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-router-certs\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.866756 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.868098 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.868376 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.869104 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.869709 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3ecdc26e-aa91-41f3-a565-d212036fe1fc-v4-0-config-user-template-error\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.879858 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sfh6\" (UniqueName: \"kubernetes.io/projected/3ecdc26e-aa91-41f3-a565-d212036fe1fc-kube-api-access-2sfh6\") pod \"oauth-openshift-6686467b65-pzjtt\" (UID: \"3ecdc26e-aa91-41f3-a565-d212036fe1fc\") " 
pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.896156 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.896208 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.896249 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.896758 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 11:10:23 crc kubenswrapper[4772]: I1128 11:10:23.896815 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a" gracePeriod=600 Nov 28 11:10:24 crc kubenswrapper[4772]: I1128 11:10:24.007511 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:24 crc kubenswrapper[4772]: I1128 11:10:24.063274 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vsrmr"] Nov 28 11:10:24 crc kubenswrapper[4772]: I1128 11:10:24.067016 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vsrmr"] Nov 28 11:10:24 crc kubenswrapper[4772]: I1128 11:10:24.208433 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6686467b65-pzjtt"] Nov 28 11:10:24 crc kubenswrapper[4772]: W1128 11:10:24.214240 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ecdc26e_aa91_41f3_a565_d212036fe1fc.slice/crio-94091fa1e6b921530cce4e77f4d68aca33aaab44aea0bcf5e54ff7cac4421c5f WatchSource:0}: Error finding container 94091fa1e6b921530cce4e77f4d68aca33aaab44aea0bcf5e54ff7cac4421c5f: Status 404 returned error can't find the container with id 94091fa1e6b921530cce4e77f4d68aca33aaab44aea0bcf5e54ff7cac4421c5f Nov 28 11:10:24 crc kubenswrapper[4772]: I1128 11:10:24.749835 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" event={"ID":"3ecdc26e-aa91-41f3-a565-d212036fe1fc","Type":"ContainerStarted","Data":"e63fec9f5da09b80c886a7ff8ff9411253a07edf8fc4fc8fd9ec51c54d5aa717"} Nov 28 11:10:24 crc kubenswrapper[4772]: I1128 11:10:24.750135 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" event={"ID":"3ecdc26e-aa91-41f3-a565-d212036fe1fc","Type":"ContainerStarted","Data":"94091fa1e6b921530cce4e77f4d68aca33aaab44aea0bcf5e54ff7cac4421c5f"} Nov 28 11:10:24 crc kubenswrapper[4772]: I1128 11:10:24.750476 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:24 crc kubenswrapper[4772]: I1128 11:10:24.753393 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a" exitCode=0 Nov 28 11:10:24 crc kubenswrapper[4772]: I1128 11:10:24.753501 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a"} Nov 28 11:10:24 crc kubenswrapper[4772]: I1128 11:10:24.753661 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"beef7ccb3e0e2e5ae83a32f266ec8c15aa9fff63861b33defb11415294193bf3"} Nov 28 11:10:24 crc kubenswrapper[4772]: I1128 11:10:24.778386 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" podStartSLOduration=29.778347887 podStartE2EDuration="29.778347887s" podCreationTimestamp="2025-11-28 11:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:10:24.773928123 +0000 UTC m=+223.097171350" watchObservedRunningTime="2025-11-28 11:10:24.778347887 +0000 UTC 
m=+223.101591114" Nov 28 11:10:24 crc kubenswrapper[4772]: I1128 11:10:24.837416 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6686467b65-pzjtt" Nov 28 11:10:26 crc kubenswrapper[4772]: I1128 11:10:26.005454 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6" path="/var/lib/kubelet/pods/fd8e4b4f-a745-4a31-b1ed-b0b220c4e1e6/volumes" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.965831 4772 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.966602 4772 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.966723 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.966886 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b" gracePeriod=15 Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.966992 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c" gracePeriod=15 Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.967033 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a" gracePeriod=15 Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.967105 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c" gracePeriod=15 Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.967087 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589" gracePeriod=15 Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.969555 4772 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 11:10:27 crc kubenswrapper[4772]: E1128 11:10:27.969739 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.969777 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 11:10:27 crc kubenswrapper[4772]: E1128 11:10:27.969788 4772 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.969797 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 11:10:27 crc kubenswrapper[4772]: E1128 11:10:27.969808 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.969816 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 11:10:27 crc kubenswrapper[4772]: E1128 11:10:27.969827 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.969835 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 11:10:27 crc kubenswrapper[4772]: E1128 11:10:27.969848 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.969855 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 11:10:27 crc kubenswrapper[4772]: E1128 11:10:27.969867 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.969873 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.970030 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.970046 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.970054 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.970061 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.970070 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.970078 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 11:10:27 crc kubenswrapper[4772]: E1128 11:10:27.970181 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 11:10:27 crc kubenswrapper[4772]: I1128 11:10:27.970189 4772 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.005524 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.131083 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.131351 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.131396 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.131446 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.131470 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.131492 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.131530 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.131552 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.232702 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.232759 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.232791 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.232822 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.232853 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.232856 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.232919 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.232941 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.232940 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: 
I1128 11:10:28.232970 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.232986 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.233009 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.232940 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.233014 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.233081 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.233197 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.302199 4772 util.go:30] "No sandbox for pod can be found. 
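
The volume records above show the reconciler's two-phase flow: VerifyControllerAttachedVolume first, then MountVolume.SetUp, which for a host-path volume is little more than confirming the path is usable. A hypothetical, stripped-down sketch of that flow (the hostPathVolume type and stand-in paths are illustrative, not the kubelet's types):

```go
package main

import (
	"fmt"
	"os"
)

// hostPathVolume is a hypothetical stand-in for a host-path volume entry
// in the reconciler's desired-state-of-world.
type hostPathVolume struct {
	uniqueName string // e.g. "kubernetes.io/host-path/<podUID>-cert-dir"
	path       string
}

// Host paths are always "attached"; the attach check is trivially true.
func (v hostPathVolume) verifyAttached() error { return nil }

// setUp for a host path reduces to checking the path exists on the node.
func (v hostPathVolume) setUp() error {
	if _, err := os.Stat(v.path); err != nil {
		return fmt.Errorf("MountVolume.SetUp failed for volume %q: %w", v.uniqueName, err)
	}
	return nil
}

func main() {
	vols := []hostPathVolume{
		{"kubernetes.io/host-path/demo-cert-dir", os.TempDir()}, // stand-in paths
		{"kubernetes.io/host-path/demo-var-log", "/var/log"},
	}
	for _, v := range vols {
		if err := v.verifyAttached(); err != nil {
			fmt.Println(err)
			continue
		}
		if err := v.setUp(); err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.uniqueName)
	}
}
```
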
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:10:28 crc kubenswrapper[4772]: W1128 11:10:28.320021 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-79edbce044bd8e210c277520c29401bab5f8771cf3ce49e03f5396917823a6a4 WatchSource:0}: Error finding container 79edbce044bd8e210c277520c29401bab5f8771cf3ce49e03f5396917823a6a4: Status 404 returned error can't find the container with id 79edbce044bd8e210c277520c29401bab5f8771cf3ce49e03f5396917823a6a4 Nov 28 11:10:28 crc kubenswrapper[4772]: E1128 11:10:28.322143 4772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c272d044d79f7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 11:10:28.321589751 +0000 UTC m=+226.644832978,LastTimestamp:2025-11-28 11:10:28.321589751 +0000 UTC m=+226.644832978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.782428 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.784180 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.784786 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c" exitCode=0 Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.784865 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c" exitCode=0 Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.784879 4772 scope.go:117] "RemoveContainer" containerID="2f034df30dd51680814955b4ba5b32d02e9505138a4dc59c749b707775a9d404" Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.784925 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589" exitCode=0 Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.784999 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a" exitCode=2 Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 
Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.786179 4772 generic.go:334] "Generic (PLEG): container finished" podID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" containerID="b3de8f67cf6e8acf16dd779fad6d917e3ea21fe8ad714af45fe5aaa021796cf7" exitCode=0
Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.786222 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc","Type":"ContainerDied","Data":"b3de8f67cf6e8acf16dd779fad6d917e3ea21fe8ad714af45fe5aaa021796cf7"}
Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.786784 4772 status_manager.go:851] "Failed to get status for pod" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.787068 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.787723 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"81784daa6e0aa6871db3b461a7dd136a1003d5cc34fd688870850af1a5609cf3"}
Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.787764 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"79edbce044bd8e210c277520c29401bab5f8771cf3ce49e03f5396917823a6a4"}
Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.788134 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 11:10:28 crc kubenswrapper[4772]: I1128 11:10:28.788277 4772 status_manager.go:851] "Failed to get status for pod" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 11:10:29 crc kubenswrapper[4772]: I1128 11:10:29.798258 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.112641 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.113845 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.114172 4772 status_manager.go:851] "Failed to get status for pod" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.267641 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-kubelet-dir\") pod \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\" (UID: \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\") "
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.267997 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-kube-api-access\") pod \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\" (UID: \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\") "
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.267785 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" (UID: "e2bcba8f-3490-4f32-9c89-0cabe7bf37cc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.268045 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-var-lock\") pod \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\" (UID: \"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc\") "
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.268073 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-var-lock" (OuterVolumeSpecName: "var-lock") pod "e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" (UID: "e2bcba8f-3490-4f32-9c89-0cabe7bf37cc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.268594 4772 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-var-lock\") on node \"crc\" DevicePath \"\""
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.268622 4772 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.273965 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" (UID: "e2bcba8f-3490-4f32-9c89-0cabe7bf37cc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.325898 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.326697 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.327525 4772 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.328598 4772 status_manager.go:851] "Failed to get status for pod" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.329006 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.370281 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2bcba8f-3490-4f32-9c89-0cabe7bf37cc-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.472094 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.472252 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.473033 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.473143 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.473421 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.473261 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.474353 4772 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.474389 4772 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.474399 4772 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.811463 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.812698 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.812655 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b" exitCode=0 Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.812829 4772 scope.go:117] "RemoveContainer" containerID="53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.816413 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e2bcba8f-3490-4f32-9c89-0cabe7bf37cc","Type":"ContainerDied","Data":"563e1cd7b943ac10695fd565bd103a7755328f72cce50059486892bf92dc15b3"} Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.816496 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="563e1cd7b943ac10695fd565bd103a7755328f72cce50059486892bf92dc15b3" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.816519 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.829674 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.829926 4772 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.830099 4772 status_manager.go:851] "Failed to get status for pod" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.840132 4772 scope.go:117] "RemoveContainer" containerID="8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.844695 4772 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.845153 4772 status_manager.go:851] "Failed to get status for pod" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.845345 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.858510 4772 scope.go:117] "RemoveContainer" containerID="7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.878165 4772 scope.go:117] "RemoveContainer" containerID="201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.895226 4772 scope.go:117] "RemoveContainer" containerID="78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.919375 4772 scope.go:117] "RemoveContainer" containerID="db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.946310 4772 scope.go:117] "RemoveContainer" containerID="53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c" Nov 28 11:10:30 crc kubenswrapper[4772]: E1128 11:10:30.946737 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\": container with ID starting with 53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c not found: ID does not exist" containerID="53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.946774 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c"} err="failed to get container status \"53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\": rpc error: code = NotFound desc = could not find container \"53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c\": container with ID starting with 53801a5bb7091e683de6e3e4d60bc6d0409c4882bb6bc4695727addc526f3d9c not found: ID does not exist" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.946796 4772 scope.go:117] "RemoveContainer" containerID="8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c" Nov 28 11:10:30 crc kubenswrapper[4772]: E1128 11:10:30.947027 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\": container with ID starting with 8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c not found: ID does not exist" containerID="8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.947058 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c"} err="failed to get container status \"8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\": rpc error: code = NotFound desc = could not find container \"8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c\": container with ID starting with 8fb32338d956f572935b6bc7bf17092fde94cd2901bea214047c60019d18ce9c not found: ID does not exist" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.947078 4772 scope.go:117] "RemoveContainer" 
containerID="7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589" Nov 28 11:10:30 crc kubenswrapper[4772]: E1128 11:10:30.947691 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\": container with ID starting with 7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589 not found: ID does not exist" containerID="7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.947723 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589"} err="failed to get container status \"7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\": rpc error: code = NotFound desc = could not find container \"7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589\": container with ID starting with 7c0a9990e589fc0f4734e2fa5742023aebbc92d1cf7712410c3e8b43262ba589 not found: ID does not exist" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.947797 4772 scope.go:117] "RemoveContainer" containerID="201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a" Nov 28 11:10:30 crc kubenswrapper[4772]: E1128 11:10:30.948246 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\": container with ID starting with 201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a not found: ID does not exist" containerID="201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.948276 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a"} err="failed to get container status \"201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\": rpc error: code = NotFound desc = could not find container \"201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a\": container with ID starting with 201163ed098ed24611a5edfcab8b3a7b9c34052a111235606f8430e43f5b193a not found: ID does not exist" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.948296 4772 scope.go:117] "RemoveContainer" containerID="78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b" Nov 28 11:10:30 crc kubenswrapper[4772]: E1128 11:10:30.949183 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\": container with ID starting with 78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b not found: ID does not exist" containerID="78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.949213 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b"} err="failed to get container status \"78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\": rpc error: code = NotFound desc = could not find container \"78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b\": container with ID starting with 
78e4a5ba03877918cea1f98f72f8ef3deb0415b28463cb95ec01417ce25fd46b not found: ID does not exist" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.949225 4772 scope.go:117] "RemoveContainer" containerID="db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5" Nov 28 11:10:30 crc kubenswrapper[4772]: E1128 11:10:30.950057 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\": container with ID starting with db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5 not found: ID does not exist" containerID="db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5" Nov 28 11:10:30 crc kubenswrapper[4772]: I1128 11:10:30.950110 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5"} err="failed to get container status \"db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\": rpc error: code = NotFound desc = could not find container \"db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5\": container with ID starting with db037338ff092af737adbc485371135e4219665218284cdd79e44049b8ee86b5 not found: ID does not exist" Nov 28 11:10:31 crc kubenswrapper[4772]: I1128 11:10:31.998547 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:32 crc kubenswrapper[4772]: I1128 11:10:32.000683 4772 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:32 crc kubenswrapper[4772]: I1128 11:10:32.001216 4772 status_manager.go:851] "Failed to get status for pod" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:32 crc kubenswrapper[4772]: I1128 11:10:32.007146 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 28 11:10:33 crc kubenswrapper[4772]: E1128 11:10:33.950426 4772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c272d044d79f7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
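
The NotFound dance above is deliberate: RemoveContainer first asks the runtime for the container's status, and a NotFound answer means the container is already gone, so the "DeleteContainer returned error" lines are logged at info level and the removal is treated as already done. A hypothetical sketch of that idempotent handling, assuming the google.golang.org/grpc module for the status codes that a CRI runtime returns:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// fakeRuntime stands in for the CRI runtime service; a status query for an
// already-deleted container comes back as codes.NotFound, as in the
// "could not find container" errors above.
type fakeRuntime struct{}

func (fakeRuntime) ContainerStatus(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// removeContainer treats NotFound as success: the container is already
// gone, so there is nothing left to delete and no error to propagate.
func removeContainer(rt fakeRuntime, id string) error {
	if err := rt.ContainerStatus(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %q already removed, nothing to do\n", id)
			return nil
		}
		return err
	}
	// Container still exists: this is where DeleteContainer would run.
	return nil
}

func main() {
	fmt.Println(removeContainer(fakeRuntime{}, "53801a5b"))
}
```
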
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 11:10:28.321589751 +0000 UTC m=+226.644832978,LastTimestamp:2025-11-28 11:10:28.321589751 +0000 UTC m=+226.644832978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 11:10:35 crc kubenswrapper[4772]: E1128 11:10:35.657376 4772 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:35 crc kubenswrapper[4772]: E1128 11:10:35.657948 4772 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:35 crc kubenswrapper[4772]: E1128 11:10:35.658207 4772 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:35 crc kubenswrapper[4772]: E1128 11:10:35.658423 4772 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:35 crc kubenswrapper[4772]: E1128 11:10:35.658582 4772 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:35 crc kubenswrapper[4772]: I1128 11:10:35.658609 4772 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 28 11:10:35 crc kubenswrapper[4772]: E1128 11:10:35.658796 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="200ms" Nov 28 11:10:35 crc kubenswrapper[4772]: E1128 11:10:35.859715 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="400ms" Nov 28 11:10:36 crc kubenswrapper[4772]: E1128 11:10:36.260955 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="800ms" Nov 28 11:10:36 crc kubenswrapper[4772]: E1128 11:10:36.364381 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:10:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:10:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:10:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:10:36Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:36 crc kubenswrapper[4772]: E1128 11:10:36.364946 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:36 crc kubenswrapper[4772]: E1128 11:10:36.365335 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:36 crc kubenswrapper[4772]: E1128 11:10:36.365717 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:36 crc kubenswrapper[4772]: E1128 11:10:36.366222 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:36 crc kubenswrapper[4772]: E1128 11:10:36.366244 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:10:37 crc kubenswrapper[4772]: E1128 11:10:37.062346 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="1.6s" Nov 28 11:10:38 crc kubenswrapper[4772]: E1128 11:10:38.663719 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="3.2s" Nov 28 11:10:41 crc kubenswrapper[4772]: E1128 11:10:41.865327 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="6.4s" Nov 28 11:10:41 crc kubenswrapper[4772]: I1128 11:10:41.996836 4772 status_manager.go:851] "Failed to 
get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:41 crc kubenswrapper[4772]: I1128 11:10:41.997137 4772 status_manager.go:851] "Failed to get status for pod" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:42 crc kubenswrapper[4772]: I1128 11:10:42.899856 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 11:10:42 crc kubenswrapper[4772]: I1128 11:10:42.900180 4772 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa" exitCode=1 Nov 28 11:10:42 crc kubenswrapper[4772]: I1128 11:10:42.900225 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa"} Nov 28 11:10:42 crc kubenswrapper[4772]: I1128 11:10:42.900796 4772 scope.go:117] "RemoveContainer" containerID="17ab08bcda5ca1bd3e80fd73ef8698b8df0ad5b120898c4b8e8325c9cd235eaa" Nov 28 11:10:42 crc kubenswrapper[4772]: I1128 11:10:42.901475 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:42 crc kubenswrapper[4772]: I1128 11:10:42.902277 4772 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:42 crc kubenswrapper[4772]: I1128 11:10:42.902838 4772 status_manager.go:851] "Failed to get status for pod" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:42 crc kubenswrapper[4772]: I1128 11:10:42.993704 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:42 crc kubenswrapper[4772]: I1128 11:10:42.994855 4772 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:42 crc kubenswrapper[4772]: I1128 11:10:42.995408 4772 status_manager.go:851] "Failed to get status for pod" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:42 crc kubenswrapper[4772]: I1128 11:10:42.995826 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.016833 4772 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="17176782-0587-4f15-a744-3cb248f523e0" Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.016872 4772 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="17176782-0587-4f15-a744-3cb248f523e0" Nov 28 11:10:43 crc kubenswrapper[4772]: E1128 11:10:43.017384 4772 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.017965 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:43 crc kubenswrapper[4772]: W1128 11:10:43.034240 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-ce79f6941ad1f103e1612c5e9a98e96eeaeb2c7d3b0a84650716817346c2f6de WatchSource:0}: Error finding container ce79f6941ad1f103e1612c5e9a98e96eeaeb2c7d3b0a84650716817346c2f6de: Status 404 returned error can't find the container with id ce79f6941ad1f103e1612c5e9a98e96eeaeb2c7d3b0a84650716817346c2f6de Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.107973 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.909851 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.910292 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"369992be533ab3e72e08be823cea148a7bfa057bd930f19fd4068a933c626f9a"} Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.911196 4772 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.911707 4772 status_manager.go:851] "Failed to get status for pod" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.911728 4772 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="38c6b02805b8b55cce4a35fdd0cd0016e1501bc3394d8070d074b463216beea1" exitCode=0 Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.911753 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"38c6b02805b8b55cce4a35fdd0cd0016e1501bc3394d8070d074b463216beea1"} Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.912010 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ce79f6941ad1f103e1612c5e9a98e96eeaeb2c7d3b0a84650716817346c2f6de"} Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.912181 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:43 crc kubenswrapper[4772]: 
I1128 11:10:43.912549 4772 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="17176782-0587-4f15-a744-3cb248f523e0" Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.912588 4772 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="17176782-0587-4f15-a744-3cb248f523e0" Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.912808 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:43 crc kubenswrapper[4772]: E1128 11:10:43.913064 4772 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.913349 4772 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:43 crc kubenswrapper[4772]: I1128 11:10:43.913844 4772 status_manager.go:851] "Failed to get status for pod" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 11:10:43 crc kubenswrapper[4772]: E1128 11:10:43.951736 4772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c272d044d79f7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 11:10:28.321589751 +0000 UTC m=+226.644832978,LastTimestamp:2025-11-28 11:10:28.321589751 +0000 UTC m=+226.644832978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 11:10:44 crc kubenswrapper[4772]: I1128 11:10:44.929301 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"764b82d3ed9a74396c4798868d8f77d6bb2a177238e88a456a27bdf3a2998678"} Nov 28 11:10:44 crc kubenswrapper[4772]: I1128 11:10:44.929648 4772 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"adfeda5696eadaca3d29dbcb7dbd5bf92768b7a1c64ecc8dae161aa26ab03e3f"} Nov 28 11:10:44 crc kubenswrapper[4772]: I1128 11:10:44.929666 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4262ef2b8936e0c2429d8bd717a212b06fefff329fe10e70b96f2fc82175974a"} Nov 28 11:10:44 crc kubenswrapper[4772]: I1128 11:10:44.929676 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0247ba6a73c26610e659c0a5daf726072c2687c2dc5094fcbd49042ba3caa9dc"} Nov 28 11:10:45 crc kubenswrapper[4772]: I1128 11:10:45.936829 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c84318eb3c1aab619cdfff8549c28ba3bf9f6acbaa648e795789ea9099f995a4"} Nov 28 11:10:45 crc kubenswrapper[4772]: I1128 11:10:45.937169 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:45 crc kubenswrapper[4772]: I1128 11:10:45.937048 4772 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="17176782-0587-4f15-a744-3cb248f523e0" Nov 28 11:10:45 crc kubenswrapper[4772]: I1128 11:10:45.937191 4772 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="17176782-0587-4f15-a744-3cb248f523e0" Nov 28 11:10:47 crc kubenswrapper[4772]: I1128 11:10:47.326499 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:10:48 crc kubenswrapper[4772]: I1128 11:10:48.018340 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:48 crc kubenswrapper[4772]: I1128 11:10:48.018853 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:48 crc kubenswrapper[4772]: I1128 11:10:48.023068 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:50 crc kubenswrapper[4772]: I1128 11:10:50.944236 4772 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:50 crc kubenswrapper[4772]: I1128 11:10:50.964315 4772 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="17176782-0587-4f15-a744-3cb248f523e0" Nov 28 11:10:50 crc kubenswrapper[4772]: I1128 11:10:50.964347 4772 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="17176782-0587-4f15-a744-3cb248f523e0" Nov 28 11:10:50 crc kubenswrapper[4772]: I1128 11:10:50.968008 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:10:51 crc kubenswrapper[4772]: I1128 11:10:51.969694 4772 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="17176782-0587-4f15-a744-3cb248f523e0" Nov 28 11:10:51 crc 
kubenswrapper[4772]: I1128 11:10:51.970012 4772 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="17176782-0587-4f15-a744-3cb248f523e0" Nov 28 11:10:52 crc kubenswrapper[4772]: I1128 11:10:52.014251 4772 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="36394802-8699-4582-8fe3-114c3cffced5" Nov 28 11:10:53 crc kubenswrapper[4772]: I1128 11:10:53.106798 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:10:53 crc kubenswrapper[4772]: I1128 11:10:53.111243 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:10:53 crc kubenswrapper[4772]: I1128 11:10:53.982284 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:10:59 crc kubenswrapper[4772]: I1128 11:10:59.901178 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 11:11:00 crc kubenswrapper[4772]: I1128 11:11:00.317775 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 28 11:11:00 crc kubenswrapper[4772]: I1128 11:11:00.528433 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 28 11:11:00 crc kubenswrapper[4772]: I1128 11:11:00.861200 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 11:11:01 crc kubenswrapper[4772]: I1128 11:11:01.227748 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 11:11:01 crc kubenswrapper[4772]: I1128 11:11:01.598433 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 28 11:11:01 crc kubenswrapper[4772]: I1128 11:11:01.677263 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 28 11:11:01 crc kubenswrapper[4772]: I1128 11:11:01.742430 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 11:11:02 crc kubenswrapper[4772]: I1128 11:11:02.092737 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 11:11:02 crc kubenswrapper[4772]: I1128 11:11:02.125103 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 11:11:02 crc kubenswrapper[4772]: I1128 11:11:02.259213 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 11:11:02 crc kubenswrapper[4772]: I1128 11:11:02.362404 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 11:11:02 crc kubenswrapper[4772]: I1128 11:11:02.716317 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 11:11:02 crc kubenswrapper[4772]: I1128 11:11:02.726293 
4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 11:11:02 crc kubenswrapper[4772]: I1128 11:11:02.843544 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 28 11:11:02 crc kubenswrapper[4772]: I1128 11:11:02.902730 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 11:11:02 crc kubenswrapper[4772]: I1128 11:11:02.911710 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 11:11:02 crc kubenswrapper[4772]: I1128 11:11:02.954961 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.039026 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.140381 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.166669 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.257023 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.653487 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.691031 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.697434 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.749618 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.752766 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.764145 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.795944 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.804604 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.887128 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.912764 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 28 11:11:03 crc kubenswrapper[4772]: I1128 11:11:03.985305 4772 
reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.078593 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.418330 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.549724 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.553266 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.562304 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.599046 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.624035 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.627296 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.690602 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.720875 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.728583 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.732000 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.754063 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.791961 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.808013 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.838505 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.896018 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 28 11:11:04 crc kubenswrapper[4772]: I1128 11:11:04.936518 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 28 11:11:04 crc 
kubenswrapper[4772]: I1128 11:11:04.983552 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.044341 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.057976 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.132756 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.217651 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.226510 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.250185 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.252447 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.414985 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.461888 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.579566 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.584538 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.623138 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.765019 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.825565 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.841210 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.855889 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.899751 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 28 11:11:05 crc kubenswrapper[4772]: I1128 11:11:05.946662 4772 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 
11:11:06.099472 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.103056 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.142983 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.207250 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.283469 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.289416 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.304169 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.312653 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.438943 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.507024 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.528173 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.623526 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.625622 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.645962 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.727175 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.733252 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.766010 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.772474 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.788138 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.811316 4772 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.857493 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.924900 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.925045 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 11:11:06 crc kubenswrapper[4772]: I1128 11:11:06.929128 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.012735 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.383410 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.404623 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.411738 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.446401 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.457473 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.472110 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.512983 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.549137 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.550829 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.571640 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.609633 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.624314 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.625073 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.701060 4772 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.734618 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.819864 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.858944 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.909244 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 11:11:07 crc kubenswrapper[4772]: I1128 11:11:07.988988 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.051780 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.067010 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.174666 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.206842 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.231679 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.280083 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.341703 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.375074 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.490915 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.584223 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.628168 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.823743 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.861695 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" 
Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.896994 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 28 11:11:08 crc kubenswrapper[4772]: I1128 11:11:08.897041 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 28 11:11:09 crc kubenswrapper[4772]: I1128 11:11:09.015969 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 11:11:09 crc kubenswrapper[4772]: I1128 11:11:09.036043 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 11:11:09 crc kubenswrapper[4772]: I1128 11:11:09.045789 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 28 11:11:09 crc kubenswrapper[4772]: I1128 11:11:09.064234 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 28 11:11:09 crc kubenswrapper[4772]: I1128 11:11:09.089337 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 11:11:09 crc kubenswrapper[4772]: I1128 11:11:09.101872 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 11:11:09 crc kubenswrapper[4772]: I1128 11:11:09.317769 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 11:11:09 crc kubenswrapper[4772]: I1128 11:11:09.353785 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 11:11:09 crc kubenswrapper[4772]: I1128 11:11:09.462510 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 11:11:09 crc kubenswrapper[4772]: I1128 11:11:09.665044 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 28 11:11:09 crc kubenswrapper[4772]: I1128 11:11:09.800332 4772 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 28 11:11:09 crc kubenswrapper[4772]: I1128 11:11:09.972286 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 28 11:11:09 crc kubenswrapper[4772]: I1128 11:11:09.974510 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 28 11:11:09 crc kubenswrapper[4772]: I1128 11:11:09.984785 4772 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.011887 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.054043 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.215516 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.239558 4772 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.257442 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.267975 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.278795 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.305260 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.313702 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.428875 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.435934 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.568783 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.669917 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.677072 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.685301 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.758784 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.791851 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.835634 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 28 11:11:10 crc kubenswrapper[4772]: I1128 11:11:10.843187 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.008060 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.127440 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.276471 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.295623 4772 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.590891 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.750109 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.752310 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.756788 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.759038 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.774699 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.779423 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.806755 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.844744 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.893724 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.903403 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 11:11:11 crc kubenswrapper[4772]: I1128 11:11:11.999824 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.063065 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.160997 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.207970 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.277382 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.321352 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.441811 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.570838 4772 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"serving-cert" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.582842 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.602914 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.621904 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.628249 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.658729 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.735164 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.894890 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 28 11:11:12 crc kubenswrapper[4772]: I1128 11:11:12.972371 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 11:11:13 crc kubenswrapper[4772]: I1128 11:11:13.168769 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 11:11:13 crc kubenswrapper[4772]: I1128 11:11:13.229893 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 28 11:11:13 crc kubenswrapper[4772]: I1128 11:11:13.255702 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 28 11:11:13 crc kubenswrapper[4772]: I1128 11:11:13.324172 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 11:11:13 crc kubenswrapper[4772]: I1128 11:11:13.326943 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 28 11:11:13 crc kubenswrapper[4772]: I1128 11:11:13.337839 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 28 11:11:13 crc kubenswrapper[4772]: I1128 11:11:13.486107 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 11:11:13 crc kubenswrapper[4772]: I1128 11:11:13.493566 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 11:11:13 crc kubenswrapper[4772]: I1128 11:11:13.522064 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 11:11:13 crc kubenswrapper[4772]: I1128 11:11:13.528606 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 28 11:11:13 crc kubenswrapper[4772]: I1128 11:11:13.630174 4772 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 11:11:13 crc kubenswrapper[4772]: I1128 11:11:13.853712 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.076588 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.197723 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.354466 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.364745 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.372523 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.390198 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.537559 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.558278 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.589198 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.703035 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.709206 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.871304 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.874183 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.982460 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 28 11:11:14 crc kubenswrapper[4772]: I1128 11:11:14.982952 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.061166 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.074681 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.211186 4772 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.456949 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.514810 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.525928 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.645955 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.684178 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.726738 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.754266 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.771013 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.928942 4772 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.931972 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=48.931940331 podStartE2EDuration="48.931940331s" podCreationTimestamp="2025-11-28 11:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:10:50.957051084 +0000 UTC m=+249.280294331" watchObservedRunningTime="2025-11-28 11:11:15.931940331 +0000 UTC m=+274.255183578" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.937785 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.937854 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.938333 4772 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="17176782-0587-4f15-a744-3cb248f523e0" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.938453 4772 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="17176782-0587-4f15-a744-3cb248f523e0" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 11:11:15.944262 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:11:15 crc kubenswrapper[4772]: I1128 
11:11:15.957830 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.957815274 podStartE2EDuration="25.957815274s" podCreationTimestamp="2025-11-28 11:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:11:15.954802046 +0000 UTC m=+274.278045273" watchObservedRunningTime="2025-11-28 11:11:15.957815274 +0000 UTC m=+274.281058501" Nov 28 11:11:16 crc kubenswrapper[4772]: I1128 11:11:16.128801 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 28 11:11:16 crc kubenswrapper[4772]: I1128 11:11:16.312805 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 28 11:11:16 crc kubenswrapper[4772]: I1128 11:11:16.398217 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 11:11:16 crc kubenswrapper[4772]: I1128 11:11:16.636479 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 11:11:17 crc kubenswrapper[4772]: I1128 11:11:17.318940 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 11:11:17 crc kubenswrapper[4772]: I1128 11:11:17.319901 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 11:11:17 crc kubenswrapper[4772]: I1128 11:11:17.683342 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 11:11:18 crc kubenswrapper[4772]: I1128 11:11:18.577949 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 28 11:11:23 crc kubenswrapper[4772]: I1128 11:11:23.776044 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 11:11:24 crc kubenswrapper[4772]: I1128 11:11:24.573480 4772 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 11:11:24 crc kubenswrapper[4772]: I1128 11:11:24.573749 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://81784daa6e0aa6871db3b461a7dd136a1003d5cc34fd688870850af1a5609cf3" gracePeriod=5 Nov 28 11:11:25 crc kubenswrapper[4772]: I1128 11:11:25.784840 4772 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 28 11:11:26 crc kubenswrapper[4772]: I1128 11:11:26.852283 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 28 11:11:28 crc kubenswrapper[4772]: I1128 11:11:28.252615 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.142054 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.142327 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.186730 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.186980 4772 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="81784daa6e0aa6871db3b461a7dd136a1003d5cc34fd688870850af1a5609cf3" exitCode=137 Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.187083 4772 scope.go:117] "RemoveContainer" containerID="81784daa6e0aa6871db3b461a7dd136a1003d5cc34fd688870850af1a5609cf3" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.187106 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.210371 4772 scope.go:117] "RemoveContainer" containerID="81784daa6e0aa6871db3b461a7dd136a1003d5cc34fd688870850af1a5609cf3" Nov 28 11:11:30 crc kubenswrapper[4772]: E1128 11:11:30.210836 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81784daa6e0aa6871db3b461a7dd136a1003d5cc34fd688870850af1a5609cf3\": container with ID starting with 81784daa6e0aa6871db3b461a7dd136a1003d5cc34fd688870850af1a5609cf3 not found: ID does not exist" containerID="81784daa6e0aa6871db3b461a7dd136a1003d5cc34fd688870850af1a5609cf3" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.210940 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81784daa6e0aa6871db3b461a7dd136a1003d5cc34fd688870850af1a5609cf3"} err="failed to get container status \"81784daa6e0aa6871db3b461a7dd136a1003d5cc34fd688870850af1a5609cf3\": rpc error: code = NotFound desc = could not find container \"81784daa6e0aa6871db3b461a7dd136a1003d5cc34fd688870850af1a5609cf3\": container with ID starting with 81784daa6e0aa6871db3b461a7dd136a1003d5cc34fd688870850af1a5609cf3 not found: ID does not exist" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.265493 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.265804 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.265924 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.266046 4772 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.266144 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.265615 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.265832 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.265974 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.266440 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.266714 4772 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.266788 4772 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.266890 4772 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.266962 4772 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.273652 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.367913 4772 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:30 crc kubenswrapper[4772]: I1128 11:11:30.771495 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.001772 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.002339 4772 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.011767 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.011805 4772 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e67e6f5d-aed2-485c-88fd-4e297a129b57" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.014784 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.014822 4772 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e67e6f5d-aed2-485c-88fd-4e297a129b57" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.751432 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6gqv"] Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.752024 4772 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g6gqv" podUID="7b72c082-4b2a-4357-b95c-51d456028f86" containerName="registry-server" containerID="cri-o://6660694e0a236af133afdcbb89cde8bd925623de19572b4c9a63c75218a4ea37" gracePeriod=30 Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.774855 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dkfw4"] Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.775319 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dkfw4" podUID="bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" containerName="registry-server" containerID="cri-o://830c536e3f526f8b98342d0d6439fbdf52935e108503e31fdb65da364e23741a" gracePeriod=30 Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.792158 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rnhdf"] Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.796902 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9j8qq"] Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.797232 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9j8qq" podUID="3108fd0f-46a1-45ab-b911-7a35f90a9a35" containerName="registry-server" containerID="cri-o://b991c8e72aebe8aae30978df38e8f47491ad26267df82018376d1a86fd34c0d3" gracePeriod=30 Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.799761 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" podUID="4bf0b541-25cb-4873-b2e2-a9466dfb4ccb" containerName="marketplace-operator" containerID="cri-o://2dcb169f61c0d68d9ae4a89ff12857a51bcd6c625c5f02668f075f68c76b4a9f" gracePeriod=30 Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.805476 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mt9sz"] Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.806321 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mt9sz" podUID="f0eee58e-18fa-496f-b10f-6e590c7e39c8" containerName="registry-server" containerID="cri-o://f30c98a8507ababbde14b625bf0dc966e5bf3c63ffd63a961bf82fe3003de637" gracePeriod=30 Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.810283 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4slqp"] Nov 28 11:11:32 crc kubenswrapper[4772]: E1128 11:11:32.810568 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.810583 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 11:11:32 crc kubenswrapper[4772]: E1128 11:11:32.810599 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" containerName="installer" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.810606 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" containerName="installer" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.810694 4772 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.810711 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bcba8f-3490-4f32-9c89-0cabe7bf37cc" containerName="installer" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.814760 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4slqp"] Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.814910 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.897123 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6c49597-5e3b-44ab-9b76-cb54e6c65736-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4slqp\" (UID: \"c6c49597-5e3b-44ab-9b76-cb54e6c65736\") " pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.897523 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh8kc\" (UniqueName: \"kubernetes.io/projected/c6c49597-5e3b-44ab-9b76-cb54e6c65736-kube-api-access-dh8kc\") pod \"marketplace-operator-79b997595-4slqp\" (UID: \"c6c49597-5e3b-44ab-9b76-cb54e6c65736\") " pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.897557 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6c49597-5e3b-44ab-9b76-cb54e6c65736-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4slqp\" (UID: \"c6c49597-5e3b-44ab-9b76-cb54e6c65736\") " pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.998773 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6c49597-5e3b-44ab-9b76-cb54e6c65736-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4slqp\" (UID: \"c6c49597-5e3b-44ab-9b76-cb54e6c65736\") " pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.998835 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh8kc\" (UniqueName: \"kubernetes.io/projected/c6c49597-5e3b-44ab-9b76-cb54e6c65736-kube-api-access-dh8kc\") pod \"marketplace-operator-79b997595-4slqp\" (UID: \"c6c49597-5e3b-44ab-9b76-cb54e6c65736\") " pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" Nov 28 11:11:32 crc kubenswrapper[4772]: I1128 11:11:32.998859 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6c49597-5e3b-44ab-9b76-cb54e6c65736-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4slqp\" (UID: \"c6c49597-5e3b-44ab-9b76-cb54e6c65736\") " pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.001542 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/c6c49597-5e3b-44ab-9b76-cb54e6c65736-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4slqp\" (UID: \"c6c49597-5e3b-44ab-9b76-cb54e6c65736\") " pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.012748 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6c49597-5e3b-44ab-9b76-cb54e6c65736-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4slqp\" (UID: \"c6c49597-5e3b-44ab-9b76-cb54e6c65736\") " pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.019419 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh8kc\" (UniqueName: \"kubernetes.io/projected/c6c49597-5e3b-44ab-9b76-cb54e6c65736-kube-api-access-dh8kc\") pod \"marketplace-operator-79b997595-4slqp\" (UID: \"c6c49597-5e3b-44ab-9b76-cb54e6c65736\") " pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.173090 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.176396 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6gqv" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.182038 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dkfw4" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.185612 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9j8qq" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.203662 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.214406 4772 generic.go:334] "Generic (PLEG): container finished" podID="f0eee58e-18fa-496f-b10f-6e590c7e39c8" containerID="f30c98a8507ababbde14b625bf0dc966e5bf3c63ffd63a961bf82fe3003de637" exitCode=0 Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.214463 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mt9sz" event={"ID":"f0eee58e-18fa-496f-b10f-6e590c7e39c8","Type":"ContainerDied","Data":"f30c98a8507ababbde14b625bf0dc966e5bf3c63ffd63a961bf82fe3003de637"} Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.214486 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mt9sz" event={"ID":"f0eee58e-18fa-496f-b10f-6e590c7e39c8","Type":"ContainerDied","Data":"e7bf65ac6860990cacce97e6dfb2acd156b915a3a8c4bfaa49f791f50bc70746"} Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.214499 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7bf65ac6860990cacce97e6dfb2acd156b915a3a8c4bfaa49f791f50bc70746" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.218178 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mt9sz" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.221915 4772 generic.go:334] "Generic (PLEG): container finished" podID="3108fd0f-46a1-45ab-b911-7a35f90a9a35" containerID="b991c8e72aebe8aae30978df38e8f47491ad26267df82018376d1a86fd34c0d3" exitCode=0 Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.221971 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9j8qq" event={"ID":"3108fd0f-46a1-45ab-b911-7a35f90a9a35","Type":"ContainerDied","Data":"b991c8e72aebe8aae30978df38e8f47491ad26267df82018376d1a86fd34c0d3"} Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.221996 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9j8qq" event={"ID":"3108fd0f-46a1-45ab-b911-7a35f90a9a35","Type":"ContainerDied","Data":"7ceb282b8cc410cf346e5b6eb128935bf71b95278be4922509a3560b39700ab1"} Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.222012 4772 scope.go:117] "RemoveContainer" containerID="b991c8e72aebe8aae30978df38e8f47491ad26267df82018376d1a86fd34c0d3" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.222112 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9j8qq" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.225228 4772 generic.go:334] "Generic (PLEG): container finished" podID="bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" containerID="830c536e3f526f8b98342d0d6439fbdf52935e108503e31fdb65da364e23741a" exitCode=0 Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.225288 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkfw4" event={"ID":"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8","Type":"ContainerDied","Data":"830c536e3f526f8b98342d0d6439fbdf52935e108503e31fdb65da364e23741a"} Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.225316 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkfw4" event={"ID":"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8","Type":"ContainerDied","Data":"3747b3753f674d2d13c6fc3d8f14a8f9055742d1eb98592e057050a4b638a005"} Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.225408 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dkfw4" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.226704 4772 generic.go:334] "Generic (PLEG): container finished" podID="4bf0b541-25cb-4873-b2e2-a9466dfb4ccb" containerID="2dcb169f61c0d68d9ae4a89ff12857a51bcd6c625c5f02668f075f68c76b4a9f" exitCode=0 Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.226739 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" event={"ID":"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb","Type":"ContainerDied","Data":"2dcb169f61c0d68d9ae4a89ff12857a51bcd6c625c5f02668f075f68c76b4a9f"} Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.226753 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" event={"ID":"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb","Type":"ContainerDied","Data":"1255d5214f68b6c5d78b3af3c8b7d4e23cce3c692011ceea45bb7dc6bbd8e1eb"} Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.226798 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rnhdf" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.237519 4772 generic.go:334] "Generic (PLEG): container finished" podID="7b72c082-4b2a-4357-b95c-51d456028f86" containerID="6660694e0a236af133afdcbb89cde8bd925623de19572b4c9a63c75218a4ea37" exitCode=0 Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.237554 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gqv" event={"ID":"7b72c082-4b2a-4357-b95c-51d456028f86","Type":"ContainerDied","Data":"6660694e0a236af133afdcbb89cde8bd925623de19572b4c9a63c75218a4ea37"} Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.237853 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gqv" event={"ID":"7b72c082-4b2a-4357-b95c-51d456028f86","Type":"ContainerDied","Data":"1244a01a39a35bebf42ebd20d871f94d7985a483492b719e1b419e79b8e75376"} Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.237632 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6gqv" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.242153 4772 scope.go:117] "RemoveContainer" containerID="9062649daa89ff34cce391f9c58488edf89ee6300b41356144587822ddaf0255" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.297200 4772 scope.go:117] "RemoveContainer" containerID="3ea7f434be1a1307e409a26a5a27083a2ffe245e213178d8a0ffa571ff744fab" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306369 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-marketplace-trusted-ca\") pod \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\" (UID: \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306421 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hdqr\" (UniqueName: \"kubernetes.io/projected/3108fd0f-46a1-45ab-b911-7a35f90a9a35-kube-api-access-4hdqr\") pod \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\" (UID: \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306506 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3108fd0f-46a1-45ab-b911-7a35f90a9a35-utilities\") pod \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\" (UID: \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306531 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b72c082-4b2a-4357-b95c-51d456028f86-catalog-content\") pod \"7b72c082-4b2a-4357-b95c-51d456028f86\" (UID: \"7b72c082-4b2a-4357-b95c-51d456028f86\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306556 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b72c082-4b2a-4357-b95c-51d456028f86-utilities\") pod \"7b72c082-4b2a-4357-b95c-51d456028f86\" (UID: \"7b72c082-4b2a-4357-b95c-51d456028f86\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306593 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-catalog-content\") pod \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\" (UID: \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306610 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfbgx\" (UniqueName: \"kubernetes.io/projected/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-kube-api-access-qfbgx\") pod \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\" (UID: \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306629 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz6lt\" (UniqueName: \"kubernetes.io/projected/7b72c082-4b2a-4357-b95c-51d456028f86-kube-api-access-lz6lt\") pod \"7b72c082-4b2a-4357-b95c-51d456028f86\" (UID: \"7b72c082-4b2a-4357-b95c-51d456028f86\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306650 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-marketplace-operator-metrics\") pod \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\" (UID: \"4bf0b541-25cb-4873-b2e2-a9466dfb4ccb\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306669 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0eee58e-18fa-496f-b10f-6e590c7e39c8-utilities\") pod \"f0eee58e-18fa-496f-b10f-6e590c7e39c8\" (UID: \"f0eee58e-18fa-496f-b10f-6e590c7e39c8\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306695 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvhsn\" (UniqueName: \"kubernetes.io/projected/f0eee58e-18fa-496f-b10f-6e590c7e39c8-kube-api-access-jvhsn\") pod \"f0eee58e-18fa-496f-b10f-6e590c7e39c8\" (UID: \"f0eee58e-18fa-496f-b10f-6e590c7e39c8\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306717 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-utilities\") pod \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\" (UID: \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306745 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3108fd0f-46a1-45ab-b911-7a35f90a9a35-catalog-content\") pod \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\" (UID: \"3108fd0f-46a1-45ab-b911-7a35f90a9a35\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306785 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxbfz\" (UniqueName: \"kubernetes.io/projected/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-kube-api-access-fxbfz\") pod \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\" (UID: \"bfe29c61-95d6-476a-b23c-9b66f5f1c5f8\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.306806 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0eee58e-18fa-496f-b10f-6e590c7e39c8-catalog-content\") pod \"f0eee58e-18fa-496f-b10f-6e590c7e39c8\" (UID: \"f0eee58e-18fa-496f-b10f-6e590c7e39c8\") " Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.307313 4772 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "4bf0b541-25cb-4873-b2e2-a9466dfb4ccb" (UID: "4bf0b541-25cb-4873-b2e2-a9466dfb4ccb"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.308014 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0eee58e-18fa-496f-b10f-6e590c7e39c8-utilities" (OuterVolumeSpecName: "utilities") pod "f0eee58e-18fa-496f-b10f-6e590c7e39c8" (UID: "f0eee58e-18fa-496f-b10f-6e590c7e39c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.308086 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b72c082-4b2a-4357-b95c-51d456028f86-utilities" (OuterVolumeSpecName: "utilities") pod "7b72c082-4b2a-4357-b95c-51d456028f86" (UID: "7b72c082-4b2a-4357-b95c-51d456028f86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.308400 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3108fd0f-46a1-45ab-b911-7a35f90a9a35-utilities" (OuterVolumeSpecName: "utilities") pod "3108fd0f-46a1-45ab-b911-7a35f90a9a35" (UID: "3108fd0f-46a1-45ab-b911-7a35f90a9a35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.308490 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-utilities" (OuterVolumeSpecName: "utilities") pod "bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" (UID: "bfe29c61-95d6-476a-b23c-9b66f5f1c5f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.310810 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0eee58e-18fa-496f-b10f-6e590c7e39c8-kube-api-access-jvhsn" (OuterVolumeSpecName: "kube-api-access-jvhsn") pod "f0eee58e-18fa-496f-b10f-6e590c7e39c8" (UID: "f0eee58e-18fa-496f-b10f-6e590c7e39c8"). InnerVolumeSpecName "kube-api-access-jvhsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.312623 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b72c082-4b2a-4357-b95c-51d456028f86-kube-api-access-lz6lt" (OuterVolumeSpecName: "kube-api-access-lz6lt") pod "7b72c082-4b2a-4357-b95c-51d456028f86" (UID: "7b72c082-4b2a-4357-b95c-51d456028f86"). InnerVolumeSpecName "kube-api-access-lz6lt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.313452 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-kube-api-access-fxbfz" (OuterVolumeSpecName: "kube-api-access-fxbfz") pod "bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" (UID: "bfe29c61-95d6-476a-b23c-9b66f5f1c5f8"). InnerVolumeSpecName "kube-api-access-fxbfz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.326257 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3108fd0f-46a1-45ab-b911-7a35f90a9a35-kube-api-access-4hdqr" (OuterVolumeSpecName: "kube-api-access-4hdqr") pod "3108fd0f-46a1-45ab-b911-7a35f90a9a35" (UID: "3108fd0f-46a1-45ab-b911-7a35f90a9a35"). InnerVolumeSpecName "kube-api-access-4hdqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.326598 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.327171 4772 scope.go:117] "RemoveContainer" containerID="b991c8e72aebe8aae30978df38e8f47491ad26267df82018376d1a86fd34c0d3" Nov 28 11:11:33 crc kubenswrapper[4772]: E1128 11:11:33.327677 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b991c8e72aebe8aae30978df38e8f47491ad26267df82018376d1a86fd34c0d3\": container with ID starting with b991c8e72aebe8aae30978df38e8f47491ad26267df82018376d1a86fd34c0d3 not found: ID does not exist" containerID="b991c8e72aebe8aae30978df38e8f47491ad26267df82018376d1a86fd34c0d3" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.327714 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b991c8e72aebe8aae30978df38e8f47491ad26267df82018376d1a86fd34c0d3"} err="failed to get container status \"b991c8e72aebe8aae30978df38e8f47491ad26267df82018376d1a86fd34c0d3\": rpc error: code = NotFound desc = could not find container \"b991c8e72aebe8aae30978df38e8f47491ad26267df82018376d1a86fd34c0d3\": container with ID starting with b991c8e72aebe8aae30978df38e8f47491ad26267df82018376d1a86fd34c0d3 not found: ID does not exist" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.327738 4772 scope.go:117] "RemoveContainer" containerID="9062649daa89ff34cce391f9c58488edf89ee6300b41356144587822ddaf0255" Nov 28 11:11:33 crc kubenswrapper[4772]: E1128 11:11:33.328130 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9062649daa89ff34cce391f9c58488edf89ee6300b41356144587822ddaf0255\": container with ID starting with 9062649daa89ff34cce391f9c58488edf89ee6300b41356144587822ddaf0255 not found: ID does not exist" containerID="9062649daa89ff34cce391f9c58488edf89ee6300b41356144587822ddaf0255" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.328144 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "4bf0b541-25cb-4873-b2e2-a9466dfb4ccb" (UID: "4bf0b541-25cb-4873-b2e2-a9466dfb4ccb"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.328164 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9062649daa89ff34cce391f9c58488edf89ee6300b41356144587822ddaf0255"} err="failed to get container status \"9062649daa89ff34cce391f9c58488edf89ee6300b41356144587822ddaf0255\": rpc error: code = NotFound desc = could not find container \"9062649daa89ff34cce391f9c58488edf89ee6300b41356144587822ddaf0255\": container with ID starting with 9062649daa89ff34cce391f9c58488edf89ee6300b41356144587822ddaf0255 not found: ID does not exist" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.328183 4772 scope.go:117] "RemoveContainer" containerID="3ea7f434be1a1307e409a26a5a27083a2ffe245e213178d8a0ffa571ff744fab" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.328380 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-kube-api-access-qfbgx" (OuterVolumeSpecName: "kube-api-access-qfbgx") pod "4bf0b541-25cb-4873-b2e2-a9466dfb4ccb" (UID: "4bf0b541-25cb-4873-b2e2-a9466dfb4ccb"). InnerVolumeSpecName "kube-api-access-qfbgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: E1128 11:11:33.328443 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea7f434be1a1307e409a26a5a27083a2ffe245e213178d8a0ffa571ff744fab\": container with ID starting with 3ea7f434be1a1307e409a26a5a27083a2ffe245e213178d8a0ffa571ff744fab not found: ID does not exist" containerID="3ea7f434be1a1307e409a26a5a27083a2ffe245e213178d8a0ffa571ff744fab" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.328474 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea7f434be1a1307e409a26a5a27083a2ffe245e213178d8a0ffa571ff744fab"} err="failed to get container status \"3ea7f434be1a1307e409a26a5a27083a2ffe245e213178d8a0ffa571ff744fab\": rpc error: code = NotFound desc = could not find container \"3ea7f434be1a1307e409a26a5a27083a2ffe245e213178d8a0ffa571ff744fab\": container with ID starting with 3ea7f434be1a1307e409a26a5a27083a2ffe245e213178d8a0ffa571ff744fab not found: ID does not exist" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.329453 4772 scope.go:117] "RemoveContainer" containerID="830c536e3f526f8b98342d0d6439fbdf52935e108503e31fdb65da364e23741a" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.332876 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3108fd0f-46a1-45ab-b911-7a35f90a9a35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3108fd0f-46a1-45ab-b911-7a35f90a9a35" (UID: "3108fd0f-46a1-45ab-b911-7a35f90a9a35"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.345885 4772 scope.go:117] "RemoveContainer" containerID="c0f14c8a77d843f37c9a67f50f4f67adfbe5fef575c8f5e7ce7a1027a0f5ef24" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.366966 4772 scope.go:117] "RemoveContainer" containerID="fbb2b792a6a7d0e4bb8a0b27ccaf7b24bdf13850daa8e6e951529f5762fe5c7a" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.369894 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b72c082-4b2a-4357-b95c-51d456028f86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b72c082-4b2a-4357-b95c-51d456028f86" (UID: "7b72c082-4b2a-4357-b95c-51d456028f86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.383559 4772 scope.go:117] "RemoveContainer" containerID="830c536e3f526f8b98342d0d6439fbdf52935e108503e31fdb65da364e23741a" Nov 28 11:11:33 crc kubenswrapper[4772]: E1128 11:11:33.384081 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"830c536e3f526f8b98342d0d6439fbdf52935e108503e31fdb65da364e23741a\": container with ID starting with 830c536e3f526f8b98342d0d6439fbdf52935e108503e31fdb65da364e23741a not found: ID does not exist" containerID="830c536e3f526f8b98342d0d6439fbdf52935e108503e31fdb65da364e23741a" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.384131 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"830c536e3f526f8b98342d0d6439fbdf52935e108503e31fdb65da364e23741a"} err="failed to get container status \"830c536e3f526f8b98342d0d6439fbdf52935e108503e31fdb65da364e23741a\": rpc error: code = NotFound desc = could not find container \"830c536e3f526f8b98342d0d6439fbdf52935e108503e31fdb65da364e23741a\": container with ID starting with 830c536e3f526f8b98342d0d6439fbdf52935e108503e31fdb65da364e23741a not found: ID does not exist" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.384160 4772 scope.go:117] "RemoveContainer" containerID="c0f14c8a77d843f37c9a67f50f4f67adfbe5fef575c8f5e7ce7a1027a0f5ef24" Nov 28 11:11:33 crc kubenswrapper[4772]: E1128 11:11:33.384505 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0f14c8a77d843f37c9a67f50f4f67adfbe5fef575c8f5e7ce7a1027a0f5ef24\": container with ID starting with c0f14c8a77d843f37c9a67f50f4f67adfbe5fef575c8f5e7ce7a1027a0f5ef24 not found: ID does not exist" containerID="c0f14c8a77d843f37c9a67f50f4f67adfbe5fef575c8f5e7ce7a1027a0f5ef24" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.384535 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0f14c8a77d843f37c9a67f50f4f67adfbe5fef575c8f5e7ce7a1027a0f5ef24"} err="failed to get container status \"c0f14c8a77d843f37c9a67f50f4f67adfbe5fef575c8f5e7ce7a1027a0f5ef24\": rpc error: code = NotFound desc = could not find container \"c0f14c8a77d843f37c9a67f50f4f67adfbe5fef575c8f5e7ce7a1027a0f5ef24\": container with ID starting with c0f14c8a77d843f37c9a67f50f4f67adfbe5fef575c8f5e7ce7a1027a0f5ef24 not found: ID does not exist" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.384563 4772 scope.go:117] "RemoveContainer" containerID="fbb2b792a6a7d0e4bb8a0b27ccaf7b24bdf13850daa8e6e951529f5762fe5c7a" Nov 28 11:11:33 crc kubenswrapper[4772]: 
E1128 11:11:33.384866 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbb2b792a6a7d0e4bb8a0b27ccaf7b24bdf13850daa8e6e951529f5762fe5c7a\": container with ID starting with fbb2b792a6a7d0e4bb8a0b27ccaf7b24bdf13850daa8e6e951529f5762fe5c7a not found: ID does not exist" containerID="fbb2b792a6a7d0e4bb8a0b27ccaf7b24bdf13850daa8e6e951529f5762fe5c7a" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.384895 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb2b792a6a7d0e4bb8a0b27ccaf7b24bdf13850daa8e6e951529f5762fe5c7a"} err="failed to get container status \"fbb2b792a6a7d0e4bb8a0b27ccaf7b24bdf13850daa8e6e951529f5762fe5c7a\": rpc error: code = NotFound desc = could not find container \"fbb2b792a6a7d0e4bb8a0b27ccaf7b24bdf13850daa8e6e951529f5762fe5c7a\": container with ID starting with fbb2b792a6a7d0e4bb8a0b27ccaf7b24bdf13850daa8e6e951529f5762fe5c7a not found: ID does not exist" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.384910 4772 scope.go:117] "RemoveContainer" containerID="2dcb169f61c0d68d9ae4a89ff12857a51bcd6c625c5f02668f075f68c76b4a9f" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.385224 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" (UID: "bfe29c61-95d6-476a-b23c-9b66f5f1c5f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.399003 4772 scope.go:117] "RemoveContainer" containerID="2dcb169f61c0d68d9ae4a89ff12857a51bcd6c625c5f02668f075f68c76b4a9f" Nov 28 11:11:33 crc kubenswrapper[4772]: E1128 11:11:33.399495 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dcb169f61c0d68d9ae4a89ff12857a51bcd6c625c5f02668f075f68c76b4a9f\": container with ID starting with 2dcb169f61c0d68d9ae4a89ff12857a51bcd6c625c5f02668f075f68c76b4a9f not found: ID does not exist" containerID="2dcb169f61c0d68d9ae4a89ff12857a51bcd6c625c5f02668f075f68c76b4a9f" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.399532 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dcb169f61c0d68d9ae4a89ff12857a51bcd6c625c5f02668f075f68c76b4a9f"} err="failed to get container status \"2dcb169f61c0d68d9ae4a89ff12857a51bcd6c625c5f02668f075f68c76b4a9f\": rpc error: code = NotFound desc = could not find container \"2dcb169f61c0d68d9ae4a89ff12857a51bcd6c625c5f02668f075f68c76b4a9f\": container with ID starting with 2dcb169f61c0d68d9ae4a89ff12857a51bcd6c625c5f02668f075f68c76b4a9f not found: ID does not exist" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.399553 4772 scope.go:117] "RemoveContainer" containerID="6660694e0a236af133afdcbb89cde8bd925623de19572b4c9a63c75218a4ea37" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.408102 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3108fd0f-46a1-45ab-b911-7a35f90a9a35-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.408130 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b72c082-4b2a-4357-b95c-51d456028f86-catalog-content\") on 
node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.408143 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b72c082-4b2a-4357-b95c-51d456028f86-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.408156 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.408170 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfbgx\" (UniqueName: \"kubernetes.io/projected/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-kube-api-access-qfbgx\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.408182 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz6lt\" (UniqueName: \"kubernetes.io/projected/7b72c082-4b2a-4357-b95c-51d456028f86-kube-api-access-lz6lt\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.408194 4772 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.408207 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0eee58e-18fa-496f-b10f-6e590c7e39c8-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.408218 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvhsn\" (UniqueName: \"kubernetes.io/projected/f0eee58e-18fa-496f-b10f-6e590c7e39c8-kube-api-access-jvhsn\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.408230 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.408244 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3108fd0f-46a1-45ab-b911-7a35f90a9a35-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.408254 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxbfz\" (UniqueName: \"kubernetes.io/projected/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8-kube-api-access-fxbfz\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.408265 4772 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.408277 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hdqr\" (UniqueName: \"kubernetes.io/projected/3108fd0f-46a1-45ab-b911-7a35f90a9a35-kube-api-access-4hdqr\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.411443 4772 scope.go:117] "RemoveContainer" containerID="8300abfbb49037646793b2e3d0cb65e85643aac40d9624499e404e9984984c2c" Nov 28 11:11:33 crc kubenswrapper[4772]: 
I1128 11:11:33.423204 4772 scope.go:117] "RemoveContainer" containerID="5ba92b670c233614bbc8e8efe45f2defadf0fa9c3ac4d3373780f0e83c3b019a" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.435495 4772 scope.go:117] "RemoveContainer" containerID="6660694e0a236af133afdcbb89cde8bd925623de19572b4c9a63c75218a4ea37" Nov 28 11:11:33 crc kubenswrapper[4772]: E1128 11:11:33.435951 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6660694e0a236af133afdcbb89cde8bd925623de19572b4c9a63c75218a4ea37\": container with ID starting with 6660694e0a236af133afdcbb89cde8bd925623de19572b4c9a63c75218a4ea37 not found: ID does not exist" containerID="6660694e0a236af133afdcbb89cde8bd925623de19572b4c9a63c75218a4ea37" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.436008 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6660694e0a236af133afdcbb89cde8bd925623de19572b4c9a63c75218a4ea37"} err="failed to get container status \"6660694e0a236af133afdcbb89cde8bd925623de19572b4c9a63c75218a4ea37\": rpc error: code = NotFound desc = could not find container \"6660694e0a236af133afdcbb89cde8bd925623de19572b4c9a63c75218a4ea37\": container with ID starting with 6660694e0a236af133afdcbb89cde8bd925623de19572b4c9a63c75218a4ea37 not found: ID does not exist" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.436046 4772 scope.go:117] "RemoveContainer" containerID="8300abfbb49037646793b2e3d0cb65e85643aac40d9624499e404e9984984c2c" Nov 28 11:11:33 crc kubenswrapper[4772]: E1128 11:11:33.436463 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8300abfbb49037646793b2e3d0cb65e85643aac40d9624499e404e9984984c2c\": container with ID starting with 8300abfbb49037646793b2e3d0cb65e85643aac40d9624499e404e9984984c2c not found: ID does not exist" containerID="8300abfbb49037646793b2e3d0cb65e85643aac40d9624499e404e9984984c2c" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.436507 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8300abfbb49037646793b2e3d0cb65e85643aac40d9624499e404e9984984c2c"} err="failed to get container status \"8300abfbb49037646793b2e3d0cb65e85643aac40d9624499e404e9984984c2c\": rpc error: code = NotFound desc = could not find container \"8300abfbb49037646793b2e3d0cb65e85643aac40d9624499e404e9984984c2c\": container with ID starting with 8300abfbb49037646793b2e3d0cb65e85643aac40d9624499e404e9984984c2c not found: ID does not exist" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.436536 4772 scope.go:117] "RemoveContainer" containerID="5ba92b670c233614bbc8e8efe45f2defadf0fa9c3ac4d3373780f0e83c3b019a" Nov 28 11:11:33 crc kubenswrapper[4772]: E1128 11:11:33.437758 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ba92b670c233614bbc8e8efe45f2defadf0fa9c3ac4d3373780f0e83c3b019a\": container with ID starting with 5ba92b670c233614bbc8e8efe45f2defadf0fa9c3ac4d3373780f0e83c3b019a not found: ID does not exist" containerID="5ba92b670c233614bbc8e8efe45f2defadf0fa9c3ac4d3373780f0e83c3b019a" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.437788 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba92b670c233614bbc8e8efe45f2defadf0fa9c3ac4d3373780f0e83c3b019a"} err="failed to get container status 
\"5ba92b670c233614bbc8e8efe45f2defadf0fa9c3ac4d3373780f0e83c3b019a\": rpc error: code = NotFound desc = could not find container \"5ba92b670c233614bbc8e8efe45f2defadf0fa9c3ac4d3373780f0e83c3b019a\": container with ID starting with 5ba92b670c233614bbc8e8efe45f2defadf0fa9c3ac4d3373780f0e83c3b019a not found: ID does not exist" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.438688 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0eee58e-18fa-496f-b10f-6e590c7e39c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0eee58e-18fa-496f-b10f-6e590c7e39c8" (UID: "f0eee58e-18fa-496f-b10f-6e590c7e39c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.508910 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0eee58e-18fa-496f-b10f-6e590c7e39c8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.558149 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rnhdf"] Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.561705 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rnhdf"] Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.568452 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9j8qq"] Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.573185 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9j8qq"] Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.576259 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dkfw4"] Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.582484 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dkfw4"] Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.589559 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6gqv"] Nov 28 11:11:33 crc kubenswrapper[4772]: I1128 11:11:33.595550 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g6gqv"] Nov 28 11:11:34 crc kubenswrapper[4772]: I1128 11:11:34.000486 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3108fd0f-46a1-45ab-b911-7a35f90a9a35" path="/var/lib/kubelet/pods/3108fd0f-46a1-45ab-b911-7a35f90a9a35/volumes" Nov 28 11:11:34 crc kubenswrapper[4772]: I1128 11:11:34.001674 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf0b541-25cb-4873-b2e2-a9466dfb4ccb" path="/var/lib/kubelet/pods/4bf0b541-25cb-4873-b2e2-a9466dfb4ccb/volumes" Nov 28 11:11:34 crc kubenswrapper[4772]: I1128 11:11:34.002258 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b72c082-4b2a-4357-b95c-51d456028f86" path="/var/lib/kubelet/pods/7b72c082-4b2a-4357-b95c-51d456028f86/volumes" Nov 28 11:11:34 crc kubenswrapper[4772]: I1128 11:11:34.003462 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" path="/var/lib/kubelet/pods/bfe29c61-95d6-476a-b23c-9b66f5f1c5f8/volumes" Nov 28 11:11:34 crc kubenswrapper[4772]: I1128 11:11:34.246751 4772 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mt9sz" Nov 28 11:11:34 crc kubenswrapper[4772]: I1128 11:11:34.268522 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mt9sz"] Nov 28 11:11:34 crc kubenswrapper[4772]: I1128 11:11:34.272745 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mt9sz"] Nov 28 11:11:34 crc kubenswrapper[4772]: I1128 11:11:34.866128 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 11:11:36 crc kubenswrapper[4772]: I1128 11:11:36.003191 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0eee58e-18fa-496f-b10f-6e590c7e39c8" path="/var/lib/kubelet/pods/f0eee58e-18fa-496f-b10f-6e590c7e39c8/volumes" Nov 28 11:11:36 crc kubenswrapper[4772]: E1128 11:11:36.026510 4772 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 28 11:11:36 crc kubenswrapper[4772]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-4slqp_openshift-marketplace_c6c49597-5e3b-44ab-9b76-cb54e6c65736_0(e6ba7c5998efceefb90d5404c7c5448f3e0c5f03125955614343b9cbdeae528f): error adding pod openshift-marketplace_marketplace-operator-79b997595-4slqp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e6ba7c5998efceefb90d5404c7c5448f3e0c5f03125955614343b9cbdeae528f" Netns:"/var/run/netns/2338a4a6-733a-4995-ba7f-82b41f6d60c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-4slqp;K8S_POD_INFRA_CONTAINER_ID=e6ba7c5998efceefb90d5404c7c5448f3e0c5f03125955614343b9cbdeae528f;K8S_POD_UID=c6c49597-5e3b-44ab-9b76-cb54e6c65736" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-4slqp] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-4slqp/c6c49597-5e3b-44ab-9b76-cb54e6c65736]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod marketplace-operator-79b997595-4slqp in out of cluster comm: pod "marketplace-operator-79b997595-4slqp" not found Nov 28 11:11:36 crc kubenswrapper[4772]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 28 11:11:36 crc kubenswrapper[4772]: > Nov 28 11:11:36 crc kubenswrapper[4772]: E1128 11:11:36.026594 4772 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 28 11:11:36 crc kubenswrapper[4772]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-4slqp_openshift-marketplace_c6c49597-5e3b-44ab-9b76-cb54e6c65736_0(e6ba7c5998efceefb90d5404c7c5448f3e0c5f03125955614343b9cbdeae528f): error adding pod openshift-marketplace_marketplace-operator-79b997595-4slqp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e6ba7c5998efceefb90d5404c7c5448f3e0c5f03125955614343b9cbdeae528f" 
Netns:"/var/run/netns/2338a4a6-733a-4995-ba7f-82b41f6d60c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-4slqp;K8S_POD_INFRA_CONTAINER_ID=e6ba7c5998efceefb90d5404c7c5448f3e0c5f03125955614343b9cbdeae528f;K8S_POD_UID=c6c49597-5e3b-44ab-9b76-cb54e6c65736" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-4slqp] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-4slqp/c6c49597-5e3b-44ab-9b76-cb54e6c65736]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod marketplace-operator-79b997595-4slqp in out of cluster comm: pod "marketplace-operator-79b997595-4slqp" not found Nov 28 11:11:36 crc kubenswrapper[4772]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 28 11:11:36 crc kubenswrapper[4772]: > pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" Nov 28 11:11:36 crc kubenswrapper[4772]: E1128 11:11:36.026618 4772 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 28 11:11:36 crc kubenswrapper[4772]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-4slqp_openshift-marketplace_c6c49597-5e3b-44ab-9b76-cb54e6c65736_0(e6ba7c5998efceefb90d5404c7c5448f3e0c5f03125955614343b9cbdeae528f): error adding pod openshift-marketplace_marketplace-operator-79b997595-4slqp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e6ba7c5998efceefb90d5404c7c5448f3e0c5f03125955614343b9cbdeae528f" Netns:"/var/run/netns/2338a4a6-733a-4995-ba7f-82b41f6d60c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-4slqp;K8S_POD_INFRA_CONTAINER_ID=e6ba7c5998efceefb90d5404c7c5448f3e0c5f03125955614343b9cbdeae528f;K8S_POD_UID=c6c49597-5e3b-44ab-9b76-cb54e6c65736" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-4slqp] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-4slqp/c6c49597-5e3b-44ab-9b76-cb54e6c65736]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod marketplace-operator-79b997595-4slqp in out of cluster comm: pod "marketplace-operator-79b997595-4slqp" not found Nov 28 11:11:36 crc kubenswrapper[4772]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 28 11:11:36 crc kubenswrapper[4772]: > pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" Nov 28 11:11:36 crc kubenswrapper[4772]: E1128 11:11:36.026690 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"marketplace-operator-79b997595-4slqp_openshift-marketplace(c6c49597-5e3b-44ab-9b76-cb54e6c65736)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"marketplace-operator-79b997595-4slqp_openshift-marketplace(c6c49597-5e3b-44ab-9b76-cb54e6c65736)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-4slqp_openshift-marketplace_c6c49597-5e3b-44ab-9b76-cb54e6c65736_0(e6ba7c5998efceefb90d5404c7c5448f3e0c5f03125955614343b9cbdeae528f): error adding pod openshift-marketplace_marketplace-operator-79b997595-4slqp to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"e6ba7c5998efceefb90d5404c7c5448f3e0c5f03125955614343b9cbdeae528f\\\" Netns:\\\"/var/run/netns/2338a4a6-733a-4995-ba7f-82b41f6d60c4\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-4slqp;K8S_POD_INFRA_CONTAINER_ID=e6ba7c5998efceefb90d5404c7c5448f3e0c5f03125955614343b9cbdeae528f;K8S_POD_UID=c6c49597-5e3b-44ab-9b76-cb54e6c65736\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-4slqp] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-4slqp/c6c49597-5e3b-44ab-9b76-cb54e6c65736]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod marketplace-operator-79b997595-4slqp in out of cluster comm: pod \\\"marketplace-operator-79b997595-4slqp\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" podUID="c6c49597-5e3b-44ab-9b76-cb54e6c65736" Nov 28 11:11:41 crc kubenswrapper[4772]: I1128 11:11:41.847497 4772 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 28 11:11:43 crc kubenswrapper[4772]: I1128 11:11:43.643242 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 28 11:11:43 crc kubenswrapper[4772]: I1128 11:11:43.982431 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 11:11:44 crc kubenswrapper[4772]: I1128 11:11:44.163535 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 28 11:11:46 crc kubenswrapper[4772]: I1128 11:11:46.993887 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" Nov 28 11:11:46 crc kubenswrapper[4772]: I1128 11:11:46.994742 4772 util.go:30] "No sandbox for pod can be found. 
Nov 28 11:11:46 crc kubenswrapper[4772]: I1128 11:11:46.994742 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4slqp"
Nov 28 11:11:47 crc kubenswrapper[4772]: I1128 11:11:47.161711 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4slqp"]
Nov 28 11:11:47 crc kubenswrapper[4772]: I1128 11:11:47.310199 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" event={"ID":"c6c49597-5e3b-44ab-9b76-cb54e6c65736","Type":"ContainerStarted","Data":"24c8a7c51291ed5fef937265f12d905216121e945aae12ee80013ce45e46fc0d"}
Nov 28 11:11:47 crc kubenswrapper[4772]: I1128 11:11:47.310253 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" event={"ID":"c6c49597-5e3b-44ab-9b76-cb54e6c65736","Type":"ContainerStarted","Data":"7c6768b5e6df1b044a793ef41f3530ab6114c54c5c3815fbb09e5373d9391122"}
Nov 28 11:11:47 crc kubenswrapper[4772]: I1128 11:11:47.310463 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4slqp"
Nov 28 11:11:47 crc kubenswrapper[4772]: I1128 11:11:47.311564 4772 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4slqp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body=
Nov 28 11:11:47 crc kubenswrapper[4772]: I1128 11:11:47.311620 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" podUID="c6c49597-5e3b-44ab-9b76-cb54e6c65736" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused"
Nov 28 11:11:47 crc kubenswrapper[4772]: I1128 11:11:47.328513 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4slqp" podStartSLOduration=15.328494223 podStartE2EDuration="15.328494223s" podCreationTimestamp="2025-11-28 11:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:11:47.327766664 +0000 UTC m=+305.651009891" watchObservedRunningTime="2025-11-28 11:11:47.328494223 +0000 UTC m=+305.651737450"
Nov 28 11:11:48 crc kubenswrapper[4772]: I1128 11:11:48.316764 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4slqp"
Nov 28 11:11:49 crc kubenswrapper[4772]: I1128 11:11:49.326015 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.032920 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l8xps"]
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.155808 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft"]
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.156018 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" podUID="a94fd25b-ca6a-4afe-922a-61ebfba248ed" containerName="route-controller-manager" containerID="cri-o://ce50e0e6a6ae7d13b905c7874b0b6765f8231a81c801c19576eb90edbf5f3eff" gracePeriod=30
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.329090 4772 generic.go:334] "Generic (PLEG): container finished" podID="a94fd25b-ca6a-4afe-922a-61ebfba248ed" containerID="ce50e0e6a6ae7d13b905c7874b0b6765f8231a81c801c19576eb90edbf5f3eff" exitCode=0
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.329204 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" event={"ID":"a94fd25b-ca6a-4afe-922a-61ebfba248ed","Type":"ContainerDied","Data":"ce50e0e6a6ae7d13b905c7874b0b6765f8231a81c801c19576eb90edbf5f3eff"}
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.329505 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" podUID="e836396e-bc34-4178-aa3e-94ce5799b2fa" containerName="controller-manager" containerID="cri-o://8aee7676751ff57fb616ad64cd13103c10a00a3bdc529f12c58d5ea525729874" gracePeriod=30
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.515464 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft"
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.621012 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a94fd25b-ca6a-4afe-922a-61ebfba248ed-config\") pod \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") "
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.621078 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a94fd25b-ca6a-4afe-922a-61ebfba248ed-serving-cert\") pod \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") "
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.621141 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqd2f\" (UniqueName: \"kubernetes.io/projected/a94fd25b-ca6a-4afe-922a-61ebfba248ed-kube-api-access-xqd2f\") pod \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") "
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.621193 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a94fd25b-ca6a-4afe-922a-61ebfba248ed-client-ca\") pod \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\" (UID: \"a94fd25b-ca6a-4afe-922a-61ebfba248ed\") "
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.621971 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a94fd25b-ca6a-4afe-922a-61ebfba248ed-config" (OuterVolumeSpecName: "config") pod "a94fd25b-ca6a-4afe-922a-61ebfba248ed" (UID: "a94fd25b-ca6a-4afe-922a-61ebfba248ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.622082 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a94fd25b-ca6a-4afe-922a-61ebfba248ed-client-ca" (OuterVolumeSpecName: "client-ca") pod "a94fd25b-ca6a-4afe-922a-61ebfba248ed" (UID: "a94fd25b-ca6a-4afe-922a-61ebfba248ed"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.629586 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a94fd25b-ca6a-4afe-922a-61ebfba248ed-kube-api-access-xqd2f" (OuterVolumeSpecName: "kube-api-access-xqd2f") pod "a94fd25b-ca6a-4afe-922a-61ebfba248ed" (UID: "a94fd25b-ca6a-4afe-922a-61ebfba248ed"). InnerVolumeSpecName "kube-api-access-xqd2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.632927 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a94fd25b-ca6a-4afe-922a-61ebfba248ed-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a94fd25b-ca6a-4afe-922a-61ebfba248ed" (UID: "a94fd25b-ca6a-4afe-922a-61ebfba248ed"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.653743 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps"
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.722402 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e836396e-bc34-4178-aa3e-94ce5799b2fa-serving-cert\") pod \"e836396e-bc34-4178-aa3e-94ce5799b2fa\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") "
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.722471 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltddk\" (UniqueName: \"kubernetes.io/projected/e836396e-bc34-4178-aa3e-94ce5799b2fa-kube-api-access-ltddk\") pod \"e836396e-bc34-4178-aa3e-94ce5799b2fa\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") "
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.722497 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-client-ca\") pod \"e836396e-bc34-4178-aa3e-94ce5799b2fa\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") "
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.722525 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-config\") pod \"e836396e-bc34-4178-aa3e-94ce5799b2fa\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") "
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.722579 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-proxy-ca-bundles\") pod \"e836396e-bc34-4178-aa3e-94ce5799b2fa\" (UID: \"e836396e-bc34-4178-aa3e-94ce5799b2fa\") "
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.722752 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a94fd25b-ca6a-4afe-922a-61ebfba248ed-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.722764 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a94fd25b-ca6a-4afe-922a-61ebfba248ed-config\") on node \"crc\" DevicePath \"\""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.722775 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqd2f\" (UniqueName: \"kubernetes.io/projected/a94fd25b-ca6a-4afe-922a-61ebfba248ed-kube-api-access-xqd2f\") on node \"crc\" DevicePath \"\""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.722787 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a94fd25b-ca6a-4afe-922a-61ebfba248ed-client-ca\") on node \"crc\" DevicePath \"\""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.723373 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e836396e-bc34-4178-aa3e-94ce5799b2fa" (UID: "e836396e-bc34-4178-aa3e-94ce5799b2fa"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.723390 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-client-ca" (OuterVolumeSpecName: "client-ca") pod "e836396e-bc34-4178-aa3e-94ce5799b2fa" (UID: "e836396e-bc34-4178-aa3e-94ce5799b2fa"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.723627 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-config" (OuterVolumeSpecName: "config") pod "e836396e-bc34-4178-aa3e-94ce5799b2fa" (UID: "e836396e-bc34-4178-aa3e-94ce5799b2fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.725776 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e836396e-bc34-4178-aa3e-94ce5799b2fa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e836396e-bc34-4178-aa3e-94ce5799b2fa" (UID: "e836396e-bc34-4178-aa3e-94ce5799b2fa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.725814 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e836396e-bc34-4178-aa3e-94ce5799b2fa-kube-api-access-ltddk" (OuterVolumeSpecName: "kube-api-access-ltddk") pod "e836396e-bc34-4178-aa3e-94ce5799b2fa" (UID: "e836396e-bc34-4178-aa3e-94ce5799b2fa"). InnerVolumeSpecName "kube-api-access-ltddk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.823862 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.823954 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e836396e-bc34-4178-aa3e-94ce5799b2fa-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.823966 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltddk\" (UniqueName: \"kubernetes.io/projected/e836396e-bc34-4178-aa3e-94ce5799b2fa-kube-api-access-ltddk\") on node \"crc\" DevicePath \"\""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.823979 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-client-ca\") on node \"crc\" DevicePath \"\""
Nov 28 11:11:51 crc kubenswrapper[4772]: I1128 11:11:51.824009 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e836396e-bc34-4178-aa3e-94ce5799b2fa-config\") on node \"crc\" DevicePath \"\""
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.334914 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft" event={"ID":"a94fd25b-ca6a-4afe-922a-61ebfba248ed","Type":"ContainerDied","Data":"77ced7fab3d5a538d86efbdc09b3e1ccfb3dca350d8f05396687b94a34261f0d"}
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.334925 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.334964 4772 scope.go:117] "RemoveContainer" containerID="ce50e0e6a6ae7d13b905c7874b0b6765f8231a81c801c19576eb90edbf5f3eff"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.338434 4772 generic.go:334] "Generic (PLEG): container finished" podID="e836396e-bc34-4178-aa3e-94ce5799b2fa" containerID="8aee7676751ff57fb616ad64cd13103c10a00a3bdc529f12c58d5ea525729874" exitCode=0
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.338478 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.338486 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" event={"ID":"e836396e-bc34-4178-aa3e-94ce5799b2fa","Type":"ContainerDied","Data":"8aee7676751ff57fb616ad64cd13103c10a00a3bdc529f12c58d5ea525729874"}
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.338520 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l8xps" event={"ID":"e836396e-bc34-4178-aa3e-94ce5799b2fa","Type":"ContainerDied","Data":"8512d1fb632895640763e3422cb2325cfdaff1cdb34407828c8d5068eaccce6d"}
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.355320 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft"]
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.356453 4772 scope.go:117] "RemoveContainer" containerID="8aee7676751ff57fb616ad64cd13103c10a00a3bdc529f12c58d5ea525729874"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.359148 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfkft"]
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.368069 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l8xps"]
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.372068 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l8xps"]
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.374233 4772 scope.go:117] "RemoveContainer" containerID="8aee7676751ff57fb616ad64cd13103c10a00a3bdc529f12c58d5ea525729874"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.374755 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aee7676751ff57fb616ad64cd13103c10a00a3bdc529f12c58d5ea525729874\": container with ID starting with 8aee7676751ff57fb616ad64cd13103c10a00a3bdc529f12c58d5ea525729874 not found: ID does not exist" containerID="8aee7676751ff57fb616ad64cd13103c10a00a3bdc529f12c58d5ea525729874"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.374797 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aee7676751ff57fb616ad64cd13103c10a00a3bdc529f12c58d5ea525729874"} err="failed to get container status \"8aee7676751ff57fb616ad64cd13103c10a00a3bdc529f12c58d5ea525729874\": rpc error: code = NotFound desc = could not find container \"8aee7676751ff57fb616ad64cd13103c10a00a3bdc529f12c58d5ea525729874\": container with ID starting with 8aee7676751ff57fb616ad64cd13103c10a00a3bdc529f12c58d5ea525729874 not found: ID does not exist"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406472 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9"]
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406657 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" containerName="registry-server"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406668 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" containerName="registry-server"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406678 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94fd25b-ca6a-4afe-922a-61ebfba248ed" containerName="route-controller-manager"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406684 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94fd25b-ca6a-4afe-922a-61ebfba248ed" containerName="route-controller-manager"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406693 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf0b541-25cb-4873-b2e2-a9466dfb4ccb" containerName="marketplace-operator"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406699 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf0b541-25cb-4873-b2e2-a9466dfb4ccb" containerName="marketplace-operator"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406707 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0eee58e-18fa-496f-b10f-6e590c7e39c8" containerName="registry-server"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406712 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0eee58e-18fa-496f-b10f-6e590c7e39c8" containerName="registry-server"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406719 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3108fd0f-46a1-45ab-b911-7a35f90a9a35" containerName="registry-server"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406725 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3108fd0f-46a1-45ab-b911-7a35f90a9a35" containerName="registry-server"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406733 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0eee58e-18fa-496f-b10f-6e590c7e39c8" containerName="extract-utilities"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406739 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0eee58e-18fa-496f-b10f-6e590c7e39c8" containerName="extract-utilities"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406747 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3108fd0f-46a1-45ab-b911-7a35f90a9a35" containerName="extract-content"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406752 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3108fd0f-46a1-45ab-b911-7a35f90a9a35" containerName="extract-content"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406760 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3108fd0f-46a1-45ab-b911-7a35f90a9a35" containerName="extract-utilities"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406766 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3108fd0f-46a1-45ab-b911-7a35f90a9a35" containerName="extract-utilities"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406773 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e836396e-bc34-4178-aa3e-94ce5799b2fa" containerName="controller-manager"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406779 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e836396e-bc34-4178-aa3e-94ce5799b2fa" containerName="controller-manager"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406787 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b72c082-4b2a-4357-b95c-51d456028f86" containerName="registry-server"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406792 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b72c082-4b2a-4357-b95c-51d456028f86" containerName="registry-server"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406799 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" containerName="extract-content"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406804 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" containerName="extract-content"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406814 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b72c082-4b2a-4357-b95c-51d456028f86" containerName="extract-content"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406820 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b72c082-4b2a-4357-b95c-51d456028f86" containerName="extract-content"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406830 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0eee58e-18fa-496f-b10f-6e590c7e39c8" containerName="extract-content"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406837 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0eee58e-18fa-496f-b10f-6e590c7e39c8" containerName="extract-content"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406848 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b72c082-4b2a-4357-b95c-51d456028f86" containerName="extract-utilities"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406855 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b72c082-4b2a-4357-b95c-51d456028f86" containerName="extract-utilities"
Nov 28 11:11:52 crc kubenswrapper[4772]: E1128 11:11:52.406863 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" containerName="extract-utilities"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406870 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" containerName="extract-utilities"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406952 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfe29c61-95d6-476a-b23c-9b66f5f1c5f8" containerName="registry-server"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406962 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3108fd0f-46a1-45ab-b911-7a35f90a9a35" containerName="registry-server"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406970 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0eee58e-18fa-496f-b10f-6e590c7e39c8" containerName="registry-server"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406978 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a94fd25b-ca6a-4afe-922a-61ebfba248ed" containerName="route-controller-manager"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406987 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b72c082-4b2a-4357-b95c-51d456028f86" containerName="registry-server"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.406993 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e836396e-bc34-4178-aa3e-94ce5799b2fa" containerName="controller-manager"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.407000 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf0b541-25cb-4873-b2e2-a9466dfb4ccb" containerName="marketplace-operator"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.407309 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.410288 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.411174 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.411281 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.411386 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.411637 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.412006 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-lzxzh"]
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.412897 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.414306 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.418244 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9"]
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.419389 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.419599 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.420131 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.420588 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.421144 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.422593 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.422921 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-lzxzh"]
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.430205 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-722xg\" (UniqueName: \"kubernetes.io/projected/8a3a2f5f-705d-475d-bb23-10aebf2fe997-kube-api-access-722xg\") pod \"route-controller-manager-5f7f4579db-nrfv9\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.535556 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxhgn\" (UniqueName: \"kubernetes.io/projected/76138317-b692-454f-abaa-35d943d9ccb2-kube-api-access-fxhgn\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.535585 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-proxy-ca-bundles\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.535609 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76138317-b692-454f-abaa-35d943d9ccb2-serving-cert\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.535630 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a3a2f5f-705d-475d-bb23-10aebf2fe997-serving-cert\") pod \"route-controller-manager-5f7f4579db-nrfv9\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.535659 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3a2f5f-705d-475d-bb23-10aebf2fe997-config\") pod \"route-controller-manager-5f7f4579db-nrfv9\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.535686 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-config\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.535713 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a3a2f5f-705d-475d-bb23-10aebf2fe997-client-ca\") pod \"route-controller-manager-5f7f4579db-nrfv9\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.535739 4772 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-client-ca\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.637018 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a3a2f5f-705d-475d-bb23-10aebf2fe997-client-ca\") pod \"route-controller-manager-5f7f4579db-nrfv9\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.637085 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-client-ca\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.637128 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-722xg\" (UniqueName: \"kubernetes.io/projected/8a3a2f5f-705d-475d-bb23-10aebf2fe997-kube-api-access-722xg\") pod \"route-controller-manager-5f7f4579db-nrfv9\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.637165 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxhgn\" (UniqueName: \"kubernetes.io/projected/76138317-b692-454f-abaa-35d943d9ccb2-kube-api-access-fxhgn\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.637191 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-proxy-ca-bundles\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.637219 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76138317-b692-454f-abaa-35d943d9ccb2-serving-cert\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.637243 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a3a2f5f-705d-475d-bb23-10aebf2fe997-serving-cert\") pod \"route-controller-manager-5f7f4579db-nrfv9\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.637896 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8a3a2f5f-705d-475d-bb23-10aebf2fe997-config\") pod \"route-controller-manager-5f7f4579db-nrfv9\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.637932 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-config\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.638118 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-client-ca\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.638186 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-proxy-ca-bundles\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.638831 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a3a2f5f-705d-475d-bb23-10aebf2fe997-client-ca\") pod \"route-controller-manager-5f7f4579db-nrfv9\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.639287 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3a2f5f-705d-475d-bb23-10aebf2fe997-config\") pod \"route-controller-manager-5f7f4579db-nrfv9\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.639391 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-config\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.642078 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76138317-b692-454f-abaa-35d943d9ccb2-serving-cert\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.642438 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a3a2f5f-705d-475d-bb23-10aebf2fe997-serving-cert\") pod \"route-controller-manager-5f7f4579db-nrfv9\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" 
Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.653059 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxhgn\" (UniqueName: \"kubernetes.io/projected/76138317-b692-454f-abaa-35d943d9ccb2-kube-api-access-fxhgn\") pod \"controller-manager-5747cbd54d-lzxzh\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.655874 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-722xg\" (UniqueName: \"kubernetes.io/projected/8a3a2f5f-705d-475d-bb23-10aebf2fe997-kube-api-access-722xg\") pod \"route-controller-manager-5f7f4579db-nrfv9\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.735348 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.743790 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.922351 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9"] Nov 28 11:11:52 crc kubenswrapper[4772]: W1128 11:11:52.933086 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a3a2f5f_705d_475d_bb23_10aebf2fe997.slice/crio-dbae6beaccc8bafc332539a34d7dd945ca0d5306d73b050e2ae3c43b8b21fcaf WatchSource:0}: Error finding container dbae6beaccc8bafc332539a34d7dd945ca0d5306d73b050e2ae3c43b8b21fcaf: Status 404 returned error can't find the container with id dbae6beaccc8bafc332539a34d7dd945ca0d5306d73b050e2ae3c43b8b21fcaf Nov 28 11:11:52 crc kubenswrapper[4772]: I1128 11:11:52.965941 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-lzxzh"] Nov 28 11:11:52 crc kubenswrapper[4772]: W1128 11:11:52.990969 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76138317_b692_454f_abaa_35d943d9ccb2.slice/crio-16bf2a9b510a0eeb0eab25069ce1bf5e1a795ee6e4a345d6d9e649e4e26a1e21 WatchSource:0}: Error finding container 16bf2a9b510a0eeb0eab25069ce1bf5e1a795ee6e4a345d6d9e649e4e26a1e21: Status 404 returned error can't find the container with id 16bf2a9b510a0eeb0eab25069ce1bf5e1a795ee6e4a345d6d9e649e4e26a1e21 Nov 28 11:11:53 crc kubenswrapper[4772]: I1128 11:11:53.347203 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" event={"ID":"8a3a2f5f-705d-475d-bb23-10aebf2fe997","Type":"ContainerStarted","Data":"19262c0c5f8b8c95c77ee9261d75ae71785ee640e9390faee515bc0a2600e673"} Nov 28 11:11:53 crc kubenswrapper[4772]: I1128 11:11:53.348218 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" Nov 28 11:11:53 crc kubenswrapper[4772]: I1128 11:11:53.348303 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" 
event={"ID":"8a3a2f5f-705d-475d-bb23-10aebf2fe997","Type":"ContainerStarted","Data":"dbae6beaccc8bafc332539a34d7dd945ca0d5306d73b050e2ae3c43b8b21fcaf"} Nov 28 11:11:53 crc kubenswrapper[4772]: I1128 11:11:53.348505 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" event={"ID":"76138317-b692-454f-abaa-35d943d9ccb2","Type":"ContainerStarted","Data":"8ffe9c7ed9b7998a3b84b973198e5589f89259c7c9a0bb8d0ae867547268f66b"} Nov 28 11:11:53 crc kubenswrapper[4772]: I1128 11:11:53.348550 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" event={"ID":"76138317-b692-454f-abaa-35d943d9ccb2","Type":"ContainerStarted","Data":"16bf2a9b510a0eeb0eab25069ce1bf5e1a795ee6e4a345d6d9e649e4e26a1e21"} Nov 28 11:11:53 crc kubenswrapper[4772]: I1128 11:11:53.348789 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:53 crc kubenswrapper[4772]: I1128 11:11:53.368739 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" Nov 28 11:11:53 crc kubenswrapper[4772]: I1128 11:11:53.402868 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:11:53 crc kubenswrapper[4772]: I1128 11:11:53.419260 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" podStartSLOduration=2.419242469 podStartE2EDuration="2.419242469s" podCreationTimestamp="2025-11-28 11:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:11:53.419098575 +0000 UTC m=+311.742341812" watchObservedRunningTime="2025-11-28 11:11:53.419242469 +0000 UTC m=+311.742485696" Nov 28 11:11:53 crc kubenswrapper[4772]: I1128 11:11:53.421241 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" podStartSLOduration=2.42123482 podStartE2EDuration="2.42123482s" podCreationTimestamp="2025-11-28 11:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:11:53.387708829 +0000 UTC m=+311.710952056" watchObservedRunningTime="2025-11-28 11:11:53.42123482 +0000 UTC m=+311.744478047" Nov 28 11:11:54 crc kubenswrapper[4772]: I1128 11:11:54.005242 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a94fd25b-ca6a-4afe-922a-61ebfba248ed" path="/var/lib/kubelet/pods/a94fd25b-ca6a-4afe-922a-61ebfba248ed/volumes" Nov 28 11:11:54 crc kubenswrapper[4772]: I1128 11:11:54.006572 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e836396e-bc34-4178-aa3e-94ce5799b2fa" path="/var/lib/kubelet/pods/e836396e-bc34-4178-aa3e-94ce5799b2fa/volumes" Nov 28 11:12:30 crc kubenswrapper[4772]: I1128 11:12:30.905513 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-s8qdx"] Nov 28 11:12:30 crc kubenswrapper[4772]: I1128 11:12:30.906579 4772 util.go:30] "No sandbox for pod can be found. 
Nov 28 11:12:30 crc kubenswrapper[4772]: I1128 11:12:30.922120 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-s8qdx"]
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.039797 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9"]
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.040279 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" podUID="8a3a2f5f-705d-475d-bb23-10aebf2fe997" containerName="route-controller-manager" containerID="cri-o://19262c0c5f8b8c95c77ee9261d75ae71785ee640e9390faee515bc0a2600e673" gracePeriod=30
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.071338 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-registry-certificates\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.071657 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-bound-sa-token\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.071791 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.071889 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-trusted-ca\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.071987 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v6mj\" (UniqueName: \"kubernetes.io/projected/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-kube-api-access-5v6mj\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.072071 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.072152 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.072230 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-registry-tls\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.129728 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.173875 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-bound-sa-token\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.173954 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.173975 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-trusted-ca\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.173991 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v6mj\" (UniqueName: \"kubernetes.io/projected/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-kube-api-access-5v6mj\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.174017 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.174034 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-registry-tls\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.174069 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-registry-certificates\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.175246 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.175671 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-registry-certificates\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.176973 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-trusted-ca\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.184126 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-registry-tls\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.186076 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.189381 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-bound-sa-token\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.217439 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v6mj\" (UniqueName: \"kubernetes.io/projected/f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7-kube-api-access-5v6mj\") pod \"image-registry-66df7c8f76-s8qdx\" (UID: \"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7\") " pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.224100 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.412189 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.580080 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a3a2f5f-705d-475d-bb23-10aebf2fe997-client-ca\") pod \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") "
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.580174 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3a2f5f-705d-475d-bb23-10aebf2fe997-config\") pod \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") "
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.580248 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a3a2f5f-705d-475d-bb23-10aebf2fe997-serving-cert\") pod \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") "
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.580274 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-722xg\" (UniqueName: \"kubernetes.io/projected/8a3a2f5f-705d-475d-bb23-10aebf2fe997-kube-api-access-722xg\") pod \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\" (UID: \"8a3a2f5f-705d-475d-bb23-10aebf2fe997\") "
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.581594 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a3a2f5f-705d-475d-bb23-10aebf2fe997-client-ca" (OuterVolumeSpecName: "client-ca") pod "8a3a2f5f-705d-475d-bb23-10aebf2fe997" (UID: "8a3a2f5f-705d-475d-bb23-10aebf2fe997"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.581882 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a3a2f5f-705d-475d-bb23-10aebf2fe997-config" (OuterVolumeSpecName: "config") pod "8a3a2f5f-705d-475d-bb23-10aebf2fe997" (UID: "8a3a2f5f-705d-475d-bb23-10aebf2fe997"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.585344 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a3a2f5f-705d-475d-bb23-10aebf2fe997-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8a3a2f5f-705d-475d-bb23-10aebf2fe997" (UID: "8a3a2f5f-705d-475d-bb23-10aebf2fe997"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.585915 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a3a2f5f-705d-475d-bb23-10aebf2fe997-kube-api-access-722xg" (OuterVolumeSpecName: "kube-api-access-722xg") pod "8a3a2f5f-705d-475d-bb23-10aebf2fe997" (UID: "8a3a2f5f-705d-475d-bb23-10aebf2fe997"). InnerVolumeSpecName "kube-api-access-722xg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.590342 4772 generic.go:334] "Generic (PLEG): container finished" podID="8a3a2f5f-705d-475d-bb23-10aebf2fe997" containerID="19262c0c5f8b8c95c77ee9261d75ae71785ee640e9390faee515bc0a2600e673" exitCode=0
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.590413 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" event={"ID":"8a3a2f5f-705d-475d-bb23-10aebf2fe997","Type":"ContainerDied","Data":"19262c0c5f8b8c95c77ee9261d75ae71785ee640e9390faee515bc0a2600e673"}
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.590464 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9" event={"ID":"8a3a2f5f-705d-475d-bb23-10aebf2fe997","Type":"ContainerDied","Data":"dbae6beaccc8bafc332539a34d7dd945ca0d5306d73b050e2ae3c43b8b21fcaf"}
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.590434 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.590490 4772 scope.go:117] "RemoveContainer" containerID="19262c0c5f8b8c95c77ee9261d75ae71785ee640e9390faee515bc0a2600e673"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.616247 4772 scope.go:117] "RemoveContainer" containerID="19262c0c5f8b8c95c77ee9261d75ae71785ee640e9390faee515bc0a2600e673"
Nov 28 11:12:31 crc kubenswrapper[4772]: E1128 11:12:31.616737 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19262c0c5f8b8c95c77ee9261d75ae71785ee640e9390faee515bc0a2600e673\": container with ID starting with 19262c0c5f8b8c95c77ee9261d75ae71785ee640e9390faee515bc0a2600e673 not found: ID does not exist" containerID="19262c0c5f8b8c95c77ee9261d75ae71785ee640e9390faee515bc0a2600e673"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.616780 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19262c0c5f8b8c95c77ee9261d75ae71785ee640e9390faee515bc0a2600e673"} err="failed to get container status \"19262c0c5f8b8c95c77ee9261d75ae71785ee640e9390faee515bc0a2600e673\": rpc error: code = NotFound desc = could not find container \"19262c0c5f8b8c95c77ee9261d75ae71785ee640e9390faee515bc0a2600e673\": container with ID starting with 19262c0c5f8b8c95c77ee9261d75ae71785ee640e9390faee515bc0a2600e673 not found: ID does not exist"
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.619279 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9"]
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.622271 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f4579db-nrfv9"]
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.665563 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-s8qdx"]
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.681961 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a3a2f5f-705d-475d-bb23-10aebf2fe997-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.681996 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-722xg\" (UniqueName: \"kubernetes.io/projected/8a3a2f5f-705d-475d-bb23-10aebf2fe997-kube-api-access-722xg\") on node \"crc\" DevicePath \"\""
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.682008 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a3a2f5f-705d-475d-bb23-10aebf2fe997-client-ca\") on node \"crc\" DevicePath \"\""
Nov 28 11:12:31 crc kubenswrapper[4772]: I1128 11:12:31.682021 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3a2f5f-705d-475d-bb23-10aebf2fe997-config\") on node \"crc\" DevicePath \"\""
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.001160 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a3a2f5f-705d-475d-bb23-10aebf2fe997" path="/var/lib/kubelet/pods/8a3a2f5f-705d-475d-bb23-10aebf2fe997/volumes"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.425244 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"]
Nov 28 11:12:32 crc kubenswrapper[4772]: E1128 11:12:32.433982 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a3a2f5f-705d-475d-bb23-10aebf2fe997" containerName="route-controller-manager"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.434081 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a3a2f5f-705d-475d-bb23-10aebf2fe997" containerName="route-controller-manager"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.434466 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a3a2f5f-705d-475d-bb23-10aebf2fe997" containerName="route-controller-manager"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.435062 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
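The error-level "ContainerStatus from runtime service failed ... NotFound" entry above is benign in this sequence: the kubelet re-queries CRI-O for a container it has just removed, and CRI-O correctly answers 404. A triage sketch under that assumption (the pairing heuristic is mine, not kubelet logic): suppress a NotFound status error when the same 64-hex container ID already appeared in a "RemoveContainer" entry.

```python
import re
import sys

# Suppress NotFound ContainerStatus errors that follow a RemoveContainer for
# the same container ID; surface everything else for inspection.
REMOVE = re.compile(r'"RemoveContainer" containerID="([0-9a-f]{64})"')
NOTFOUND = re.compile(r'"ContainerStatus from runtime service failed"'
                      r'.*NotFound.*containerID="([0-9a-f]{64})"')

removed = set()
for line in sys.stdin:
    m = REMOVE.search(line)
    if m:
        removed.add(m.group(1))
        continue
    m = NOTFOUND.search(line)
    if m and m.group(1) not in removed:
        print("unexplained NotFound:", line.rstrip())
```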
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.436950 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.437029 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.437169 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.441026 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.442179 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.445711 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.446788 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"]
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.591002 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eba90707-83f5-49d1-9a7f-26bc5f30b92c-client-ca\") pod \"route-controller-manager-f9847b9c-krbz6\" (UID: \"eba90707-83f5-49d1-9a7f-26bc5f30b92c\") " pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.591107 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba90707-83f5-49d1-9a7f-26bc5f30b92c-config\") pod \"route-controller-manager-f9847b9c-krbz6\" (UID: \"eba90707-83f5-49d1-9a7f-26bc5f30b92c\") " pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.591136 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eba90707-83f5-49d1-9a7f-26bc5f30b92c-serving-cert\") pod \"route-controller-manager-f9847b9c-krbz6\" (UID: \"eba90707-83f5-49d1-9a7f-26bc5f30b92c\") " pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.591161 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7jb7\" (UniqueName: \"kubernetes.io/projected/eba90707-83f5-49d1-9a7f-26bc5f30b92c-kube-api-access-n7jb7\") pod \"route-controller-manager-f9847b9c-krbz6\" (UID: \"eba90707-83f5-49d1-9a7f-26bc5f30b92c\") " pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.595783 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx" event={"ID":"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7","Type":"ContainerStarted","Data":"3895963bc309ee1a1c7a10c5e63f919102f18c051e8a24c3ef675c23aeaad61b"}
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.595827 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx" event={"ID":"f1f2dbd7-e572-4dcc-8980-9cb08c9c0ac7","Type":"ContainerStarted","Data":"201f1a8932883834db1ea11beed08ce7a49f82bd8f300d82ef4fc9527d04d5ee"}
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.595951 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.614736 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx" podStartSLOduration=2.6147178 podStartE2EDuration="2.6147178s" podCreationTimestamp="2025-11-28 11:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:12:32.613519746 +0000 UTC m=+350.936762993" watchObservedRunningTime="2025-11-28 11:12:32.6147178 +0000 UTC m=+350.937961027"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.692262 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba90707-83f5-49d1-9a7f-26bc5f30b92c-config\") pod \"route-controller-manager-f9847b9c-krbz6\" (UID: \"eba90707-83f5-49d1-9a7f-26bc5f30b92c\") " pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.692309 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eba90707-83f5-49d1-9a7f-26bc5f30b92c-serving-cert\") pod \"route-controller-manager-f9847b9c-krbz6\" (UID: \"eba90707-83f5-49d1-9a7f-26bc5f30b92c\") " pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.692333 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7jb7\" (UniqueName: \"kubernetes.io/projected/eba90707-83f5-49d1-9a7f-26bc5f30b92c-kube-api-access-n7jb7\") pod \"route-controller-manager-f9847b9c-krbz6\" (UID: \"eba90707-83f5-49d1-9a7f-26bc5f30b92c\") " pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.692380 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eba90707-83f5-49d1-9a7f-26bc5f30b92c-client-ca\") pod \"route-controller-manager-f9847b9c-krbz6\" (UID: \"eba90707-83f5-49d1-9a7f-26bc5f30b92c\") " pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.693918 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eba90707-83f5-49d1-9a7f-26bc5f30b92c-client-ca\") pod \"route-controller-manager-f9847b9c-krbz6\" (UID: \"eba90707-83f5-49d1-9a7f-26bc5f30b92c\") " pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.694041 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba90707-83f5-49d1-9a7f-26bc5f30b92c-config\") pod \"route-controller-manager-f9847b9c-krbz6\" (UID: \"eba90707-83f5-49d1-9a7f-26bc5f30b92c\") " pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.697768 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eba90707-83f5-49d1-9a7f-26bc5f30b92c-serving-cert\") pod \"route-controller-manager-f9847b9c-krbz6\" (UID: \"eba90707-83f5-49d1-9a7f-26bc5f30b92c\") " pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.708440 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7jb7\" (UniqueName: \"kubernetes.io/projected/eba90707-83f5-49d1-9a7f-26bc5f30b92c-kube-api-access-n7jb7\") pod \"route-controller-manager-f9847b9c-krbz6\" (UID: \"eba90707-83f5-49d1-9a7f-26bc5f30b92c\") " pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
Nov 28 11:12:32 crc kubenswrapper[4772]: I1128 11:12:32.751221 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
Nov 28 11:12:33 crc kubenswrapper[4772]: I1128 11:12:33.129843 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"]
Nov 28 11:12:33 crc kubenswrapper[4772]: W1128 11:12:33.139862 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeba90707_83f5_49d1_9a7f_26bc5f30b92c.slice/crio-fa22e624e11db4e2b7c76c7c573c367a85b3f63a6802c1778322a74da8b0a729 WatchSource:0}: Error finding container fa22e624e11db4e2b7c76c7c573c367a85b3f63a6802c1778322a74da8b0a729: Status 404 returned error can't find the container with id fa22e624e11db4e2b7c76c7c573c367a85b3f63a6802c1778322a74da8b0a729
Nov 28 11:12:33 crc kubenswrapper[4772]: I1128 11:12:33.603259 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6" event={"ID":"eba90707-83f5-49d1-9a7f-26bc5f30b92c","Type":"ContainerStarted","Data":"3cb6301d89b48a4a25be02e94a59d0b15f687c12332822f68d043fa1d304bf25"}
Nov 28 11:12:33 crc kubenswrapper[4772]: I1128 11:12:33.603614 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6" event={"ID":"eba90707-83f5-49d1-9a7f-26bc5f30b92c","Type":"ContainerStarted","Data":"fa22e624e11db4e2b7c76c7c573c367a85b3f63a6802c1778322a74da8b0a729"}
Nov 28 11:12:33 crc kubenswrapper[4772]: I1128 11:12:33.619442 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6" podStartSLOduration=2.619420596 podStartE2EDuration="2.619420596s" podCreationTimestamp="2025-11-28 11:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:12:33.61849365 +0000 UTC m=+351.941736887" watchObservedRunningTime="2025-11-28 11:12:33.619420596 +0000 UTC m=+351.942663833"
Nov 28 11:12:34 crc kubenswrapper[4772]: I1128 11:12:34.608902 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6"
pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6" Nov 28 11:12:34 crc kubenswrapper[4772]: I1128 11:12:34.614165 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f9847b9c-krbz6" Nov 28 11:12:40 crc kubenswrapper[4772]: I1128 11:12:40.841556 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zw5p4"] Nov 28 11:12:40 crc kubenswrapper[4772]: I1128 11:12:40.844060 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:40 crc kubenswrapper[4772]: I1128 11:12:40.846485 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 11:12:40 crc kubenswrapper[4772]: I1128 11:12:40.853074 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zw5p4"] Nov 28 11:12:40 crc kubenswrapper[4772]: I1128 11:12:40.995371 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4glgp\" (UniqueName: \"kubernetes.io/projected/37640314-124c-448f-bb98-bc81f7b7ab0f-kube-api-access-4glgp\") pod \"redhat-marketplace-zw5p4\" (UID: \"37640314-124c-448f-bb98-bc81f7b7ab0f\") " pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:40 crc kubenswrapper[4772]: I1128 11:12:40.995479 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37640314-124c-448f-bb98-bc81f7b7ab0f-catalog-content\") pod \"redhat-marketplace-zw5p4\" (UID: \"37640314-124c-448f-bb98-bc81f7b7ab0f\") " pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:40 crc kubenswrapper[4772]: I1128 11:12:40.995510 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37640314-124c-448f-bb98-bc81f7b7ab0f-utilities\") pod \"redhat-marketplace-zw5p4\" (UID: \"37640314-124c-448f-bb98-bc81f7b7ab0f\") " pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.097194 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37640314-124c-448f-bb98-bc81f7b7ab0f-catalog-content\") pod \"redhat-marketplace-zw5p4\" (UID: \"37640314-124c-448f-bb98-bc81f7b7ab0f\") " pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.097581 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37640314-124c-448f-bb98-bc81f7b7ab0f-catalog-content\") pod \"redhat-marketplace-zw5p4\" (UID: \"37640314-124c-448f-bb98-bc81f7b7ab0f\") " pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.097724 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37640314-124c-448f-bb98-bc81f7b7ab0f-utilities\") pod \"redhat-marketplace-zw5p4\" (UID: \"37640314-124c-448f-bb98-bc81f7b7ab0f\") " pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.097957 4772 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37640314-124c-448f-bb98-bc81f7b7ab0f-utilities\") pod \"redhat-marketplace-zw5p4\" (UID: \"37640314-124c-448f-bb98-bc81f7b7ab0f\") " pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.098243 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4glgp\" (UniqueName: \"kubernetes.io/projected/37640314-124c-448f-bb98-bc81f7b7ab0f-kube-api-access-4glgp\") pod \"redhat-marketplace-zw5p4\" (UID: \"37640314-124c-448f-bb98-bc81f7b7ab0f\") " pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.116798 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4glgp\" (UniqueName: \"kubernetes.io/projected/37640314-124c-448f-bb98-bc81f7b7ab0f-kube-api-access-4glgp\") pod \"redhat-marketplace-zw5p4\" (UID: \"37640314-124c-448f-bb98-bc81f7b7ab0f\") " pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.174605 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.433672 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ch7sp"] Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.434960 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.439295 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.443546 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ch7sp"] Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.504764 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r26wp\" (UniqueName: \"kubernetes.io/projected/89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932-kube-api-access-r26wp\") pod \"redhat-operators-ch7sp\" (UID: \"89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932\") " pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.504837 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932-catalog-content\") pod \"redhat-operators-ch7sp\" (UID: \"89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932\") " pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.504870 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932-utilities\") pod \"redhat-operators-ch7sp\" (UID: \"89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932\") " pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.606283 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932-utilities\") pod \"redhat-operators-ch7sp\" (UID: \"89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932\") " 
pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.606562 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r26wp\" (UniqueName: \"kubernetes.io/projected/89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932-kube-api-access-r26wp\") pod \"redhat-operators-ch7sp\" (UID: \"89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932\") " pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.606771 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932-utilities\") pod \"redhat-operators-ch7sp\" (UID: \"89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932\") " pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.606983 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932-catalog-content\") pod \"redhat-operators-ch7sp\" (UID: \"89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932\") " pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.607323 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932-catalog-content\") pod \"redhat-operators-ch7sp\" (UID: \"89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932\") " pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.609170 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zw5p4"] Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.628028 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r26wp\" (UniqueName: \"kubernetes.io/projected/89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932-kube-api-access-r26wp\") pod \"redhat-operators-ch7sp\" (UID: \"89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932\") " pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.641036 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zw5p4" event={"ID":"37640314-124c-448f-bb98-bc81f7b7ab0f","Type":"ContainerStarted","Data":"1ad8e54dffccbc6c4fb7c2e92898cea0d928a803f5dfa2e6112bd85aa3cc8089"} Nov 28 11:12:41 crc kubenswrapper[4772]: I1128 11:12:41.751229 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:42 crc kubenswrapper[4772]: I1128 11:12:42.140053 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ch7sp"] Nov 28 11:12:42 crc kubenswrapper[4772]: W1128 11:12:42.147607 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89b22f99_c5ec_4cfb_9ebc_c2f81c9dc932.slice/crio-03fcbe1665e830c0e88d7e85808b1f3d4234500aff5d103b3a02dd1c394bfb10 WatchSource:0}: Error finding container 03fcbe1665e830c0e88d7e85808b1f3d4234500aff5d103b3a02dd1c394bfb10: Status 404 returned error can't find the container with id 03fcbe1665e830c0e88d7e85808b1f3d4234500aff5d103b3a02dd1c394bfb10 Nov 28 11:12:42 crc kubenswrapper[4772]: I1128 11:12:42.648462 4772 generic.go:334] "Generic (PLEG): container finished" podID="37640314-124c-448f-bb98-bc81f7b7ab0f" containerID="03b0a1cfcf2812142522f570c0cbfe6ee2a8e0167426fd82d6ae10d96de75d29" exitCode=0 Nov 28 11:12:42 crc kubenswrapper[4772]: I1128 11:12:42.648545 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zw5p4" event={"ID":"37640314-124c-448f-bb98-bc81f7b7ab0f","Type":"ContainerDied","Data":"03b0a1cfcf2812142522f570c0cbfe6ee2a8e0167426fd82d6ae10d96de75d29"} Nov 28 11:12:42 crc kubenswrapper[4772]: I1128 11:12:42.650033 4772 generic.go:334] "Generic (PLEG): container finished" podID="89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932" containerID="b9c944a197e39d2290f0ff6b69204aa4bb1077aa6798ce40edcb01828a4eaac0" exitCode=0 Nov 28 11:12:42 crc kubenswrapper[4772]: I1128 11:12:42.650068 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch7sp" event={"ID":"89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932","Type":"ContainerDied","Data":"b9c944a197e39d2290f0ff6b69204aa4bb1077aa6798ce40edcb01828a4eaac0"} Nov 28 11:12:42 crc kubenswrapper[4772]: I1128 11:12:42.650088 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch7sp" event={"ID":"89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932","Type":"ContainerStarted","Data":"03fcbe1665e830c0e88d7e85808b1f3d4234500aff5d103b3a02dd1c394bfb10"} Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.235145 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-42knr"] Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.236732 4772 util.go:30] "No sandbox for pod can be found. 
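The "Generic (PLEG): container finished ... exitCode=0" entries above are the catalog pods' extract containers completing normally before their registry servers start; a nonzero exit code at this step would be the first sign of a broken catalog image. A short filter over exactly that line shape:

```python
import re
import sys

# Flag any "container finished" PLEG notice with a nonzero exit code.
FINISHED = re.compile(
    r'"Generic \(PLEG\): container finished" podID="([^"]+)" '
    r'containerID="([0-9a-f]{64})" exitCode=(-?\d+)')

for line in sys.stdin:
    m = FINISHED.search(line)
    if m and m.group(3) != "0":
        print(f"pod {m.group(1)} container {m.group(2)[:12]} exited {m.group(3)}")
```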
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.239117 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.253924 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-42knr"]
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.330972 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5efb9-80ec-409e-9e49-326461bfa739-utilities\") pod \"community-operators-42knr\" (UID: \"42c5efb9-80ec-409e-9e49-326461bfa739\") " pod="openshift-marketplace/community-operators-42knr"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.331298 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9xcw\" (UniqueName: \"kubernetes.io/projected/42c5efb9-80ec-409e-9e49-326461bfa739-kube-api-access-r9xcw\") pod \"community-operators-42knr\" (UID: \"42c5efb9-80ec-409e-9e49-326461bfa739\") " pod="openshift-marketplace/community-operators-42knr"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.331341 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5efb9-80ec-409e-9e49-326461bfa739-catalog-content\") pod \"community-operators-42knr\" (UID: \"42c5efb9-80ec-409e-9e49-326461bfa739\") " pod="openshift-marketplace/community-operators-42knr"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.432569 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5efb9-80ec-409e-9e49-326461bfa739-utilities\") pod \"community-operators-42knr\" (UID: \"42c5efb9-80ec-409e-9e49-326461bfa739\") " pod="openshift-marketplace/community-operators-42knr"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.432614 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9xcw\" (UniqueName: \"kubernetes.io/projected/42c5efb9-80ec-409e-9e49-326461bfa739-kube-api-access-r9xcw\") pod \"community-operators-42knr\" (UID: \"42c5efb9-80ec-409e-9e49-326461bfa739\") " pod="openshift-marketplace/community-operators-42knr"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.432654 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5efb9-80ec-409e-9e49-326461bfa739-catalog-content\") pod \"community-operators-42knr\" (UID: \"42c5efb9-80ec-409e-9e49-326461bfa739\") " pod="openshift-marketplace/community-operators-42knr"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.433174 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5efb9-80ec-409e-9e49-326461bfa739-catalog-content\") pod \"community-operators-42knr\" (UID: \"42c5efb9-80ec-409e-9e49-326461bfa739\") " pod="openshift-marketplace/community-operators-42knr"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.433232 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5efb9-80ec-409e-9e49-326461bfa739-utilities\") pod \"community-operators-42knr\" (UID: \"42c5efb9-80ec-409e-9e49-326461bfa739\") " pod="openshift-marketplace/community-operators-42knr"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.456288 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9xcw\" (UniqueName: \"kubernetes.io/projected/42c5efb9-80ec-409e-9e49-326461bfa739-kube-api-access-r9xcw\") pod \"community-operators-42knr\" (UID: \"42c5efb9-80ec-409e-9e49-326461bfa739\") " pod="openshift-marketplace/community-operators-42knr"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.614678 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-42knr"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.669551 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch7sp" event={"ID":"89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932","Type":"ContainerStarted","Data":"098cf668572fc68fa657c7ae0ec0fb52fe5a3c0da86cbdb366acb06264499ae8"}
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.838551 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9h6kq"]
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.840891 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h6kq"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.842615 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.855866 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9h6kq"]
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.867534 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-42knr"]
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.939122 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08428ad5-f854-4a72-a10a-bc72715b05a0-utilities\") pod \"certified-operators-9h6kq\" (UID: \"08428ad5-f854-4a72-a10a-bc72715b05a0\") " pod="openshift-marketplace/certified-operators-9h6kq"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.939222 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08428ad5-f854-4a72-a10a-bc72715b05a0-catalog-content\") pod \"certified-operators-9h6kq\" (UID: \"08428ad5-f854-4a72-a10a-bc72715b05a0\") " pod="openshift-marketplace/certified-operators-9h6kq"
Nov 28 11:12:43 crc kubenswrapper[4772]: I1128 11:12:43.939247 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwl8z\" (UniqueName: \"kubernetes.io/projected/08428ad5-f854-4a72-a10a-bc72715b05a0-kube-api-access-fwl8z\") pod \"certified-operators-9h6kq\" (UID: \"08428ad5-f854-4a72-a10a-bc72715b05a0\") " pod="openshift-marketplace/certified-operators-9h6kq"
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.040872 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08428ad5-f854-4a72-a10a-bc72715b05a0-catalog-content\") pod \"certified-operators-9h6kq\" (UID: \"08428ad5-f854-4a72-a10a-bc72715b05a0\") " pod="openshift-marketplace/certified-operators-9h6kq"
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.040927 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwl8z\" (UniqueName: \"kubernetes.io/projected/08428ad5-f854-4a72-a10a-bc72715b05a0-kube-api-access-fwl8z\") pod \"certified-operators-9h6kq\" (UID: \"08428ad5-f854-4a72-a10a-bc72715b05a0\") " pod="openshift-marketplace/certified-operators-9h6kq"
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.040952 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08428ad5-f854-4a72-a10a-bc72715b05a0-utilities\") pod \"certified-operators-9h6kq\" (UID: \"08428ad5-f854-4a72-a10a-bc72715b05a0\") " pod="openshift-marketplace/certified-operators-9h6kq"
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.041420 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08428ad5-f854-4a72-a10a-bc72715b05a0-catalog-content\") pod \"certified-operators-9h6kq\" (UID: \"08428ad5-f854-4a72-a10a-bc72715b05a0\") " pod="openshift-marketplace/certified-operators-9h6kq"
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.041485 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08428ad5-f854-4a72-a10a-bc72715b05a0-utilities\") pod \"certified-operators-9h6kq\" (UID: \"08428ad5-f854-4a72-a10a-bc72715b05a0\") " pod="openshift-marketplace/certified-operators-9h6kq"
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.061277 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwl8z\" (UniqueName: \"kubernetes.io/projected/08428ad5-f854-4a72-a10a-bc72715b05a0-kube-api-access-fwl8z\") pod \"certified-operators-9h6kq\" (UID: \"08428ad5-f854-4a72-a10a-bc72715b05a0\") " pod="openshift-marketplace/certified-operators-9h6kq"
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.157783 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h6kq"
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.364997 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9h6kq"]
Nov 28 11:12:44 crc kubenswrapper[4772]: W1128 11:12:44.375565 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08428ad5_f854_4a72_a10a_bc72715b05a0.slice/crio-c13fba703413fdc199323d38a844cc81dc783bce4d2366e3d81acdaf779c9b05 WatchSource:0}: Error finding container c13fba703413fdc199323d38a844cc81dc783bce4d2366e3d81acdaf779c9b05: Status 404 returned error can't find the container with id c13fba703413fdc199323d38a844cc81dc783bce4d2366e3d81acdaf779c9b05
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.676873 4772 generic.go:334] "Generic (PLEG): container finished" podID="42c5efb9-80ec-409e-9e49-326461bfa739" containerID="e2d836b3bd79c804b3d088dc0de0c0ece534a33cfc0058ccf88d21f923d46926" exitCode=0
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.676936 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42knr" event={"ID":"42c5efb9-80ec-409e-9e49-326461bfa739","Type":"ContainerDied","Data":"e2d836b3bd79c804b3d088dc0de0c0ece534a33cfc0058ccf88d21f923d46926"}
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.676964 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42knr" event={"ID":"42c5efb9-80ec-409e-9e49-326461bfa739","Type":"ContainerStarted","Data":"9e04762c3e115cb0f51030c2a4973645ce6dbea520aa88d614ede141f5531225"}
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.678847 4772 generic.go:334] "Generic (PLEG): container finished" podID="89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932" containerID="098cf668572fc68fa657c7ae0ec0fb52fe5a3c0da86cbdb366acb06264499ae8" exitCode=0
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.678935 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch7sp" event={"ID":"89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932","Type":"ContainerDied","Data":"098cf668572fc68fa657c7ae0ec0fb52fe5a3c0da86cbdb366acb06264499ae8"}
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.680307 4772 generic.go:334] "Generic (PLEG): container finished" podID="08428ad5-f854-4a72-a10a-bc72715b05a0" containerID="b58677cf31cf84e6360cc288d774fda1b7e2be8ce342202772e22d219719a187" exitCode=0
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.680409 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h6kq" event={"ID":"08428ad5-f854-4a72-a10a-bc72715b05a0","Type":"ContainerDied","Data":"b58677cf31cf84e6360cc288d774fda1b7e2be8ce342202772e22d219719a187"}
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.680442 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h6kq" event={"ID":"08428ad5-f854-4a72-a10a-bc72715b05a0","Type":"ContainerStarted","Data":"c13fba703413fdc199323d38a844cc81dc783bce4d2366e3d81acdaf779c9b05"}
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.684565 4772 generic.go:334] "Generic (PLEG): container finished" podID="37640314-124c-448f-bb98-bc81f7b7ab0f" containerID="5ea0ce38385ba5b17a5a55c4d2d66b73907e5617a464f19605bebd97553b11e8" exitCode=0
Nov 28 11:12:44 crc kubenswrapper[4772]: I1128 11:12:44.684601 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zw5p4" event={"ID":"37640314-124c-448f-bb98-bc81f7b7ab0f","Type":"ContainerDied","Data":"5ea0ce38385ba5b17a5a55c4d2d66b73907e5617a464f19605bebd97553b11e8"}
Nov 28 11:12:45 crc kubenswrapper[4772]: I1128 11:12:45.691049 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42knr" event={"ID":"42c5efb9-80ec-409e-9e49-326461bfa739","Type":"ContainerStarted","Data":"d8d62f5144cb5e2c61436279a6fa8cc7a39611640950173f696cd5baab6c7452"}
Nov 28 11:12:45 crc kubenswrapper[4772]: I1128 11:12:45.694831 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch7sp" event={"ID":"89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932","Type":"ContainerStarted","Data":"d38f8c07a0a0ad3ce0a85a4c797856cdfd1dbd0d90dcf0ff18558f426713ca29"}
Nov 28 11:12:45 crc kubenswrapper[4772]: I1128 11:12:45.697233 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zw5p4" event={"ID":"37640314-124c-448f-bb98-bc81f7b7ab0f","Type":"ContainerStarted","Data":"6bb7ecae4fca62753404f0842e15520546a04b4de24166429d5096b14a7831eb"}
Nov 28 11:12:45 crc kubenswrapper[4772]: I1128 11:12:45.735981 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zw5p4" podStartSLOduration=3.213618175 podStartE2EDuration="5.735960583s" podCreationTimestamp="2025-11-28 11:12:40 +0000 UTC" firstStartedPulling="2025-11-28 11:12:42.652232186 +0000 UTC m=+360.975475413" lastFinishedPulling="2025-11-28 11:12:45.174574594 +0000 UTC m=+363.497817821" observedRunningTime="2025-11-28 11:12:45.732700742 +0000 UTC m=+364.055943969" watchObservedRunningTime="2025-11-28 11:12:45.735960583 +0000 UTC m=+364.059203810"
Nov 28 11:12:45 crc kubenswrapper[4772]: I1128 11:12:45.751990 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ch7sp" podStartSLOduration=2.144046408 podStartE2EDuration="4.751968826s" podCreationTimestamp="2025-11-28 11:12:41 +0000 UTC" firstStartedPulling="2025-11-28 11:12:42.651412634 +0000 UTC m=+360.974655861" lastFinishedPulling="2025-11-28 11:12:45.259335052 +0000 UTC m=+363.582578279" observedRunningTime="2025-11-28 11:12:45.749532178 +0000 UTC m=+364.072775405" watchObservedRunningTime="2025-11-28 11:12:45.751968826 +0000 UTC m=+364.075212053"
Nov 28 11:12:46 crc kubenswrapper[4772]: I1128 11:12:46.703489 4772 generic.go:334] "Generic (PLEG): container finished" podID="42c5efb9-80ec-409e-9e49-326461bfa739" containerID="d8d62f5144cb5e2c61436279a6fa8cc7a39611640950173f696cd5baab6c7452" exitCode=0
Nov 28 11:12:46 crc kubenswrapper[4772]: I1128 11:12:46.703555 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42knr" event={"ID":"42c5efb9-80ec-409e-9e49-326461bfa739","Type":"ContainerDied","Data":"d8d62f5144cb5e2c61436279a6fa8cc7a39611640950173f696cd5baab6c7452"}
Nov 28 11:12:46 crc kubenswrapper[4772]: I1128 11:12:46.706422 4772 generic.go:334] "Generic (PLEG): container finished" podID="08428ad5-f854-4a72-a10a-bc72715b05a0" containerID="3b6405cce4db4e750f31bcdf91b2409ea71707c3d2e6845c4977e14aa338d827" exitCode=0
Nov 28 11:12:46 crc kubenswrapper[4772]: I1128 11:12:46.706470 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h6kq" event={"ID":"08428ad5-f854-4a72-a10a-bc72715b05a0","Type":"ContainerDied","Data":"3b6405cce4db4e750f31bcdf91b2409ea71707c3d2e6845c4977e14aa338d827"}
event={"ID":"08428ad5-f854-4a72-a10a-bc72715b05a0","Type":"ContainerDied","Data":"3b6405cce4db4e750f31bcdf91b2409ea71707c3d2e6845c4977e14aa338d827"} Nov 28 11:12:47 crc kubenswrapper[4772]: I1128 11:12:47.715299 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42knr" event={"ID":"42c5efb9-80ec-409e-9e49-326461bfa739","Type":"ContainerStarted","Data":"b772ae8d4908eeedf1cf04026fdade9792cdd030d6b87901ce3dc1f26978166d"} Nov 28 11:12:47 crc kubenswrapper[4772]: I1128 11:12:47.718297 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h6kq" event={"ID":"08428ad5-f854-4a72-a10a-bc72715b05a0","Type":"ContainerStarted","Data":"cf862882a5605ff2ec5fb5afc47e8cb5f693637b29bb3cbad626774e1fabc05b"} Nov 28 11:12:47 crc kubenswrapper[4772]: I1128 11:12:47.739089 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-42knr" podStartSLOduration=2.179469371 podStartE2EDuration="4.739072481s" podCreationTimestamp="2025-11-28 11:12:43 +0000 UTC" firstStartedPulling="2025-11-28 11:12:44.678714402 +0000 UTC m=+363.001957629" lastFinishedPulling="2025-11-28 11:12:47.238317502 +0000 UTC m=+365.561560739" observedRunningTime="2025-11-28 11:12:47.731904122 +0000 UTC m=+366.055147349" watchObservedRunningTime="2025-11-28 11:12:47.739072481 +0000 UTC m=+366.062315708" Nov 28 11:12:47 crc kubenswrapper[4772]: I1128 11:12:47.757101 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9h6kq" podStartSLOduration=2.247988607 podStartE2EDuration="4.757081369s" podCreationTimestamp="2025-11-28 11:12:43 +0000 UTC" firstStartedPulling="2025-11-28 11:12:44.682711552 +0000 UTC m=+363.005954789" lastFinishedPulling="2025-11-28 11:12:47.191804324 +0000 UTC m=+365.515047551" observedRunningTime="2025-11-28 11:12:47.756158854 +0000 UTC m=+366.079402071" watchObservedRunningTime="2025-11-28 11:12:47.757081369 +0000 UTC m=+366.080324596" Nov 28 11:12:51 crc kubenswrapper[4772]: I1128 11:12:51.175737 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:51 crc kubenswrapper[4772]: I1128 11:12:51.176096 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:51 crc kubenswrapper[4772]: I1128 11:12:51.211725 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:51 crc kubenswrapper[4772]: I1128 11:12:51.229659 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-s8qdx" Nov 28 11:12:51 crc kubenswrapper[4772]: I1128 11:12:51.281868 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pgb48"] Nov 28 11:12:51 crc kubenswrapper[4772]: I1128 11:12:51.752519 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:51 crc kubenswrapper[4772]: I1128 11:12:51.752577 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:51 crc kubenswrapper[4772]: I1128 11:12:51.788705 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:51 crc kubenswrapper[4772]: I1128 11:12:51.789766 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zw5p4" Nov 28 11:12:52 crc kubenswrapper[4772]: I1128 11:12:52.800132 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ch7sp" Nov 28 11:12:53 crc kubenswrapper[4772]: I1128 11:12:53.615152 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-42knr" Nov 28 11:12:53 crc kubenswrapper[4772]: I1128 11:12:53.615515 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-42knr" Nov 28 11:12:53 crc kubenswrapper[4772]: I1128 11:12:53.651917 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-42knr" Nov 28 11:12:53 crc kubenswrapper[4772]: I1128 11:12:53.798965 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-42knr" Nov 28 11:12:53 crc kubenswrapper[4772]: I1128 11:12:53.896618 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:12:53 crc kubenswrapper[4772]: I1128 11:12:53.897515 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:12:54 crc kubenswrapper[4772]: I1128 11:12:54.157984 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9h6kq" Nov 28 11:12:54 crc kubenswrapper[4772]: I1128 11:12:54.158042 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9h6kq" Nov 28 11:12:54 crc kubenswrapper[4772]: I1128 11:12:54.196630 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9h6kq" Nov 28 11:12:54 crc kubenswrapper[4772]: I1128 11:12:54.801839 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9h6kq" Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.030750 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-lzxzh"] Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.031391 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" podUID="76138317-b692-454f-abaa-35d943d9ccb2" containerName="controller-manager" containerID="cri-o://8ffe9c7ed9b7998a3b84b973198e5589f89259c7c9a0bb8d0ae867547268f66b" gracePeriod=30 Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.407425 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.589520 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-config\") pod \"76138317-b692-454f-abaa-35d943d9ccb2\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.589937 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxhgn\" (UniqueName: \"kubernetes.io/projected/76138317-b692-454f-abaa-35d943d9ccb2-kube-api-access-fxhgn\") pod \"76138317-b692-454f-abaa-35d943d9ccb2\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.589957 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-client-ca\") pod \"76138317-b692-454f-abaa-35d943d9ccb2\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.590035 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-proxy-ca-bundles\") pod \"76138317-b692-454f-abaa-35d943d9ccb2\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.590053 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76138317-b692-454f-abaa-35d943d9ccb2-serving-cert\") pod \"76138317-b692-454f-abaa-35d943d9ccb2\" (UID: \"76138317-b692-454f-abaa-35d943d9ccb2\") " Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.590805 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "76138317-b692-454f-abaa-35d943d9ccb2" (UID: "76138317-b692-454f-abaa-35d943d9ccb2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.590855 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-client-ca" (OuterVolumeSpecName: "client-ca") pod "76138317-b692-454f-abaa-35d943d9ccb2" (UID: "76138317-b692-454f-abaa-35d943d9ccb2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.590931 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-config" (OuterVolumeSpecName: "config") pod "76138317-b692-454f-abaa-35d943d9ccb2" (UID: "76138317-b692-454f-abaa-35d943d9ccb2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.591047 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.591069 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.594795 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76138317-b692-454f-abaa-35d943d9ccb2-kube-api-access-fxhgn" (OuterVolumeSpecName: "kube-api-access-fxhgn") pod "76138317-b692-454f-abaa-35d943d9ccb2" (UID: "76138317-b692-454f-abaa-35d943d9ccb2"). InnerVolumeSpecName "kube-api-access-fxhgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.595239 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76138317-b692-454f-abaa-35d943d9ccb2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "76138317-b692-454f-abaa-35d943d9ccb2" (UID: "76138317-b692-454f-abaa-35d943d9ccb2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.692237 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76138317-b692-454f-abaa-35d943d9ccb2-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.692275 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxhgn\" (UniqueName: \"kubernetes.io/projected/76138317-b692-454f-abaa-35d943d9ccb2-kube-api-access-fxhgn\") on node \"crc\" DevicePath \"\"" Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.692286 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76138317-b692-454f-abaa-35d943d9ccb2-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.863586 4772 generic.go:334] "Generic (PLEG): container finished" podID="76138317-b692-454f-abaa-35d943d9ccb2" containerID="8ffe9c7ed9b7998a3b84b973198e5589f89259c7c9a0bb8d0ae867547268f66b" exitCode=0 Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.863632 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" event={"ID":"76138317-b692-454f-abaa-35d943d9ccb2","Type":"ContainerDied","Data":"8ffe9c7ed9b7998a3b84b973198e5589f89259c7c9a0bb8d0ae867547268f66b"} Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.863657 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" event={"ID":"76138317-b692-454f-abaa-35d943d9ccb2","Type":"ContainerDied","Data":"16bf2a9b510a0eeb0eab25069ce1bf5e1a795ee6e4a345d6d9e649e4e26a1e21"} Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.863675 4772 scope.go:117] "RemoveContainer" containerID="8ffe9c7ed9b7998a3b84b973198e5589f89259c7c9a0bb8d0ae867547268f66b" Nov 28 11:13:11 crc kubenswrapper[4772]: I1128 11:13:11.863773 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.459423 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x"] Nov 28 11:13:12 crc kubenswrapper[4772]: E1128 11:13:12.460890 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76138317-b692-454f-abaa-35d943d9ccb2" containerName="controller-manager" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.460965 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="76138317-b692-454f-abaa-35d943d9ccb2" containerName="controller-manager" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.461505 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="76138317-b692-454f-abaa-35d943d9ccb2" containerName="controller-manager" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.462477 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.466656 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.467055 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.467767 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.467857 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.469033 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.476606 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.477610 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.487556 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x"] Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.504180 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/003621fa-cfdc-43c3-8da4-251df06eb45f-serving-cert\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.504238 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/003621fa-cfdc-43c3-8da4-251df06eb45f-client-ca\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.504261 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/003621fa-cfdc-43c3-8da4-251df06eb45f-config\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.504311 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/003621fa-cfdc-43c3-8da4-251df06eb45f-proxy-ca-bundles\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.504349 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz59c\" (UniqueName: \"kubernetes.io/projected/003621fa-cfdc-43c3-8da4-251df06eb45f-kube-api-access-zz59c\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.605518 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/003621fa-cfdc-43c3-8da4-251df06eb45f-serving-cert\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.605732 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/003621fa-cfdc-43c3-8da4-251df06eb45f-client-ca\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.605805 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/003621fa-cfdc-43c3-8da4-251df06eb45f-config\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.605873 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/003621fa-cfdc-43c3-8da4-251df06eb45f-proxy-ca-bundles\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.605980 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz59c\" (UniqueName: \"kubernetes.io/projected/003621fa-cfdc-43c3-8da4-251df06eb45f-kube-api-access-zz59c\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.606742 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/003621fa-cfdc-43c3-8da4-251df06eb45f-client-ca\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.607457 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/003621fa-cfdc-43c3-8da4-251df06eb45f-config\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.618569 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/003621fa-cfdc-43c3-8da4-251df06eb45f-proxy-ca-bundles\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.623012 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/003621fa-cfdc-43c3-8da4-251df06eb45f-serving-cert\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.623093 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz59c\" (UniqueName: \"kubernetes.io/projected/003621fa-cfdc-43c3-8da4-251df06eb45f-kube-api-access-zz59c\") pod \"controller-manager-5896c5f7cc-fhm9x\" (UID: \"003621fa-cfdc-43c3-8da4-251df06eb45f\") " pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.786698 4772 scope.go:117] "RemoveContainer" containerID="8ffe9c7ed9b7998a3b84b973198e5589f89259c7c9a0bb8d0ae867547268f66b" Nov 28 11:13:12 crc kubenswrapper[4772]: E1128 11:13:12.787384 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ffe9c7ed9b7998a3b84b973198e5589f89259c7c9a0bb8d0ae867547268f66b\": container with ID starting with 8ffe9c7ed9b7998a3b84b973198e5589f89259c7c9a0bb8d0ae867547268f66b not found: ID does not exist" containerID="8ffe9c7ed9b7998a3b84b973198e5589f89259c7c9a0bb8d0ae867547268f66b" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.787441 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ffe9c7ed9b7998a3b84b973198e5589f89259c7c9a0bb8d0ae867547268f66b"} err="failed to get container status \"8ffe9c7ed9b7998a3b84b973198e5589f89259c7c9a0bb8d0ae867547268f66b\": rpc error: code = NotFound desc = could not find container \"8ffe9c7ed9b7998a3b84b973198e5589f89259c7c9a0bb8d0ae867547268f66b\": container with ID starting with 8ffe9c7ed9b7998a3b84b973198e5589f89259c7c9a0bb8d0ae867547268f66b not found: ID does not exist" Nov 28 11:13:12 crc kubenswrapper[4772]: I1128 11:13:12.811522 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:13 crc kubenswrapper[4772]: I1128 11:13:13.192507 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x"] Nov 28 11:13:13 crc kubenswrapper[4772]: I1128 11:13:13.876814 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" event={"ID":"003621fa-cfdc-43c3-8da4-251df06eb45f","Type":"ContainerStarted","Data":"6140d360778af4fa8c6467c7d4ceb83494dfa69ec72a5283cb32726e0d5f89da"} Nov 28 11:13:13 crc kubenswrapper[4772]: I1128 11:13:13.877169 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:13 crc kubenswrapper[4772]: I1128 11:13:13.877185 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" event={"ID":"003621fa-cfdc-43c3-8da4-251df06eb45f","Type":"ContainerStarted","Data":"0b0669e74f1a6df3197178446ed0389bc23d37d786e9a299d50c2333a6db2ec5"} Nov 28 11:13:13 crc kubenswrapper[4772]: I1128 11:13:13.881776 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" Nov 28 11:13:13 crc kubenswrapper[4772]: I1128 11:13:13.900397 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5896c5f7cc-fhm9x" podStartSLOduration=2.90037629 podStartE2EDuration="2.90037629s" podCreationTimestamp="2025-11-28 11:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:13:13.894508728 +0000 UTC m=+392.217751965" watchObservedRunningTime="2025-11-28 11:13:13.90037629 +0000 UTC m=+392.223619507" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.331316 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" podUID="01678d74-64f0-4bee-b900-6dd92b577842" containerName="registry" containerID="cri-o://32f8cdb7c5e429280cf8323613aaa6fdc07dc6a0b3dfa211c8e4b03a3554a3a3" gracePeriod=30 Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.729051 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.866496 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-bound-sa-token\") pod \"01678d74-64f0-4bee-b900-6dd92b577842\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.866556 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01678d74-64f0-4bee-b900-6dd92b577842-trusted-ca\") pod \"01678d74-64f0-4bee-b900-6dd92b577842\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.866785 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"01678d74-64f0-4bee-b900-6dd92b577842\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.866833 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-registry-tls\") pod \"01678d74-64f0-4bee-b900-6dd92b577842\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.866870 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5tbn\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-kube-api-access-p5tbn\") pod \"01678d74-64f0-4bee-b900-6dd92b577842\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.866945 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/01678d74-64f0-4bee-b900-6dd92b577842-registry-certificates\") pod \"01678d74-64f0-4bee-b900-6dd92b577842\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.866984 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/01678d74-64f0-4bee-b900-6dd92b577842-installation-pull-secrets\") pod \"01678d74-64f0-4bee-b900-6dd92b577842\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.867026 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/01678d74-64f0-4bee-b900-6dd92b577842-ca-trust-extracted\") pod \"01678d74-64f0-4bee-b900-6dd92b577842\" (UID: \"01678d74-64f0-4bee-b900-6dd92b577842\") " Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.868526 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01678d74-64f0-4bee-b900-6dd92b577842-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "01678d74-64f0-4bee-b900-6dd92b577842" (UID: "01678d74-64f0-4bee-b900-6dd92b577842"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.868695 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01678d74-64f0-4bee-b900-6dd92b577842-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "01678d74-64f0-4bee-b900-6dd92b577842" (UID: "01678d74-64f0-4bee-b900-6dd92b577842"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.876507 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-kube-api-access-p5tbn" (OuterVolumeSpecName: "kube-api-access-p5tbn") pod "01678d74-64f0-4bee-b900-6dd92b577842" (UID: "01678d74-64f0-4bee-b900-6dd92b577842"). InnerVolumeSpecName "kube-api-access-p5tbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.876647 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "01678d74-64f0-4bee-b900-6dd92b577842" (UID: "01678d74-64f0-4bee-b900-6dd92b577842"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.879083 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "01678d74-64f0-4bee-b900-6dd92b577842" (UID: "01678d74-64f0-4bee-b900-6dd92b577842"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.879297 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01678d74-64f0-4bee-b900-6dd92b577842-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "01678d74-64f0-4bee-b900-6dd92b577842" (UID: "01678d74-64f0-4bee-b900-6dd92b577842"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.886108 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01678d74-64f0-4bee-b900-6dd92b577842-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "01678d74-64f0-4bee-b900-6dd92b577842" (UID: "01678d74-64f0-4bee-b900-6dd92b577842"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.886408 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "01678d74-64f0-4bee-b900-6dd92b577842" (UID: "01678d74-64f0-4bee-b900-6dd92b577842"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.900465 4772 generic.go:334] "Generic (PLEG): container finished" podID="01678d74-64f0-4bee-b900-6dd92b577842" containerID="32f8cdb7c5e429280cf8323613aaa6fdc07dc6a0b3dfa211c8e4b03a3554a3a3" exitCode=0 Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.900534 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" event={"ID":"01678d74-64f0-4bee-b900-6dd92b577842","Type":"ContainerDied","Data":"32f8cdb7c5e429280cf8323613aaa6fdc07dc6a0b3dfa211c8e4b03a3554a3a3"} Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.900574 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.900590 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pgb48" event={"ID":"01678d74-64f0-4bee-b900-6dd92b577842","Type":"ContainerDied","Data":"a0ee6e411006de20c74f36ba49954caf9b126a7e373a9bb6752c61daab5be368"} Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.900625 4772 scope.go:117] "RemoveContainer" containerID="32f8cdb7c5e429280cf8323613aaa6fdc07dc6a0b3dfa211c8e4b03a3554a3a3" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.922721 4772 scope.go:117] "RemoveContainer" containerID="32f8cdb7c5e429280cf8323613aaa6fdc07dc6a0b3dfa211c8e4b03a3554a3a3" Nov 28 11:13:16 crc kubenswrapper[4772]: E1128 11:13:16.923197 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f8cdb7c5e429280cf8323613aaa6fdc07dc6a0b3dfa211c8e4b03a3554a3a3\": container with ID starting with 32f8cdb7c5e429280cf8323613aaa6fdc07dc6a0b3dfa211c8e4b03a3554a3a3 not found: ID does not exist" containerID="32f8cdb7c5e429280cf8323613aaa6fdc07dc6a0b3dfa211c8e4b03a3554a3a3" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.923242 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f8cdb7c5e429280cf8323613aaa6fdc07dc6a0b3dfa211c8e4b03a3554a3a3"} err="failed to get container status \"32f8cdb7c5e429280cf8323613aaa6fdc07dc6a0b3dfa211c8e4b03a3554a3a3\": rpc error: code = NotFound desc = could not find container \"32f8cdb7c5e429280cf8323613aaa6fdc07dc6a0b3dfa211c8e4b03a3554a3a3\": container with ID starting with 32f8cdb7c5e429280cf8323613aaa6fdc07dc6a0b3dfa211c8e4b03a3554a3a3 not found: ID does not exist" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.941818 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pgb48"] Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.944272 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pgb48"] Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.968233 4772 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/01678d74-64f0-4bee-b900-6dd92b577842-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.968282 4772 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/01678d74-64f0-4bee-b900-6dd92b577842-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 11:13:16 crc 
kubenswrapper[4772]: I1128 11:13:16.968293 4772 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/01678d74-64f0-4bee-b900-6dd92b577842-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.968304 4772 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.968313 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01678d74-64f0-4bee-b900-6dd92b577842-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.968323 4772 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:13:16 crc kubenswrapper[4772]: I1128 11:13:16.968331 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5tbn\" (UniqueName: \"kubernetes.io/projected/01678d74-64f0-4bee-b900-6dd92b577842-kube-api-access-p5tbn\") on node \"crc\" DevicePath \"\"" Nov 28 11:13:18 crc kubenswrapper[4772]: I1128 11:13:18.003971 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01678d74-64f0-4bee-b900-6dd92b577842" path="/var/lib/kubelet/pods/01678d74-64f0-4bee-b900-6dd92b577842/volumes" Nov 28 11:13:23 crc kubenswrapper[4772]: I1128 11:13:23.896436 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:13:23 crc kubenswrapper[4772]: I1128 11:13:23.897037 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:13:42 crc kubenswrapper[4772]: I1128 11:13:42.779734 4772 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod76138317-b692-454f-abaa-35d943d9ccb2"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod76138317-b692-454f-abaa-35d943d9ccb2] : Timed out while waiting for systemd to remove kubepods-burstable-pod76138317_b692_454f_abaa_35d943d9ccb2.slice" Nov 28 11:13:42 crc kubenswrapper[4772]: E1128 11:13:42.780656 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod76138317-b692-454f-abaa-35d943d9ccb2] : unable to destroy cgroup paths for cgroup [kubepods burstable pod76138317-b692-454f-abaa-35d943d9ccb2] : Timed out while waiting for systemd to remove kubepods-burstable-pod76138317_b692_454f_abaa_35d943d9ccb2.slice" pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" podUID="76138317-b692-454f-abaa-35d943d9ccb2" Nov 28 11:13:43 crc kubenswrapper[4772]: I1128 11:13:43.042026 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5747cbd54d-lzxzh" Nov 28 11:13:43 crc kubenswrapper[4772]: I1128 11:13:43.082270 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-lzxzh"] Nov 28 11:13:43 crc kubenswrapper[4772]: I1128 11:13:43.085249 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-lzxzh"] Nov 28 11:13:44 crc kubenswrapper[4772]: I1128 11:13:44.006380 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76138317-b692-454f-abaa-35d943d9ccb2" path="/var/lib/kubelet/pods/76138317-b692-454f-abaa-35d943d9ccb2/volumes" Nov 28 11:13:53 crc kubenswrapper[4772]: I1128 11:13:53.896569 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:13:53 crc kubenswrapper[4772]: I1128 11:13:53.897286 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:13:53 crc kubenswrapper[4772]: I1128 11:13:53.897352 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:13:53 crc kubenswrapper[4772]: I1128 11:13:53.898174 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"beef7ccb3e0e2e5ae83a32f266ec8c15aa9fff63861b33defb11415294193bf3"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 11:13:53 crc kubenswrapper[4772]: I1128 11:13:53.898296 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://beef7ccb3e0e2e5ae83a32f266ec8c15aa9fff63861b33defb11415294193bf3" gracePeriod=600 Nov 28 11:13:55 crc kubenswrapper[4772]: I1128 11:13:55.109519 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="beef7ccb3e0e2e5ae83a32f266ec8c15aa9fff63861b33defb11415294193bf3" exitCode=0 Nov 28 11:13:55 crc kubenswrapper[4772]: I1128 11:13:55.109641 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"beef7ccb3e0e2e5ae83a32f266ec8c15aa9fff63861b33defb11415294193bf3"} Nov 28 11:13:55 crc kubenswrapper[4772]: I1128 11:13:55.109933 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"719ebb3dbeb04504957f753c3982248f2e3853f40081e16785cf8530808c4dd9"} Nov 28 11:13:55 crc kubenswrapper[4772]: I1128 11:13:55.109963 4772 scope.go:117] "RemoveContainer" 
containerID="6a21ed9e03bf61abbda8ac2c345c75a38f26d9cd6b78a0e7f2771bc8d6a9963a" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.179887 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf"] Nov 28 11:15:00 crc kubenswrapper[4772]: E1128 11:15:00.180991 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01678d74-64f0-4bee-b900-6dd92b577842" containerName="registry" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.181016 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="01678d74-64f0-4bee-b900-6dd92b577842" containerName="registry" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.181197 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="01678d74-64f0-4bee-b900-6dd92b577842" containerName="registry" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.182153 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.184320 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.184608 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.189893 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf"] Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.278416 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cn9x\" (UniqueName: \"kubernetes.io/projected/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-kube-api-access-2cn9x\") pod \"collect-profiles-29405475-4lxmf\" (UID: \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.278711 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-secret-volume\") pod \"collect-profiles-29405475-4lxmf\" (UID: \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.278785 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-config-volume\") pod \"collect-profiles-29405475-4lxmf\" (UID: \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.380128 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-config-volume\") pod \"collect-profiles-29405475-4lxmf\" (UID: \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.380406 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2cn9x\" (UniqueName: \"kubernetes.io/projected/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-kube-api-access-2cn9x\") pod \"collect-profiles-29405475-4lxmf\" (UID: \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.380504 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-secret-volume\") pod \"collect-profiles-29405475-4lxmf\" (UID: \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.383024 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-config-volume\") pod \"collect-profiles-29405475-4lxmf\" (UID: \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.385662 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-secret-volume\") pod \"collect-profiles-29405475-4lxmf\" (UID: \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.394408 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cn9x\" (UniqueName: \"kubernetes.io/projected/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-kube-api-access-2cn9x\") pod \"collect-profiles-29405475-4lxmf\" (UID: \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.540391 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" Nov 28 11:15:00 crc kubenswrapper[4772]: I1128 11:15:00.920752 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf"] Nov 28 11:15:01 crc kubenswrapper[4772]: I1128 11:15:01.546720 4772 generic.go:334] "Generic (PLEG): container finished" podID="b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c" containerID="d2762c6278fe8cc95f7fd3e48327ab0d03aed28e82b53cfeea687f8d1ba35b62" exitCode=0 Nov 28 11:15:01 crc kubenswrapper[4772]: I1128 11:15:01.546800 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" event={"ID":"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c","Type":"ContainerDied","Data":"d2762c6278fe8cc95f7fd3e48327ab0d03aed28e82b53cfeea687f8d1ba35b62"} Nov 28 11:15:01 crc kubenswrapper[4772]: I1128 11:15:01.547020 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" event={"ID":"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c","Type":"ContainerStarted","Data":"28c70bc0b1d1e3b61aba609b5a7b53d0876981019d8f801d57854fad3c6187ac"} Nov 28 11:15:02 crc kubenswrapper[4772]: I1128 11:15:02.770825 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" Nov 28 11:15:02 crc kubenswrapper[4772]: I1128 11:15:02.909114 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-secret-volume\") pod \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\" (UID: \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\") " Nov 28 11:15:02 crc kubenswrapper[4772]: I1128 11:15:02.909173 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-config-volume\") pod \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\" (UID: \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\") " Nov 28 11:15:02 crc kubenswrapper[4772]: I1128 11:15:02.909308 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cn9x\" (UniqueName: \"kubernetes.io/projected/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-kube-api-access-2cn9x\") pod \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\" (UID: \"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c\") " Nov 28 11:15:02 crc kubenswrapper[4772]: I1128 11:15:02.910152 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-config-volume" (OuterVolumeSpecName: "config-volume") pod "b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c" (UID: "b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:15:02 crc kubenswrapper[4772]: I1128 11:15:02.914729 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c" (UID: "b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:15:02 crc kubenswrapper[4772]: I1128 11:15:02.914842 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-kube-api-access-2cn9x" (OuterVolumeSpecName: "kube-api-access-2cn9x") pod "b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c" (UID: "b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c"). InnerVolumeSpecName "kube-api-access-2cn9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:15:03 crc kubenswrapper[4772]: I1128 11:15:03.010823 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cn9x\" (UniqueName: \"kubernetes.io/projected/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-kube-api-access-2cn9x\") on node \"crc\" DevicePath \"\"" Nov 28 11:15:03 crc kubenswrapper[4772]: I1128 11:15:03.010854 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 11:15:03 crc kubenswrapper[4772]: I1128 11:15:03.010887 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 11:15:03 crc kubenswrapper[4772]: I1128 11:15:03.559625 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" event={"ID":"b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c","Type":"ContainerDied","Data":"28c70bc0b1d1e3b61aba609b5a7b53d0876981019d8f801d57854fad3c6187ac"} Nov 28 11:15:03 crc kubenswrapper[4772]: I1128 11:15:03.559681 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c70bc0b1d1e3b61aba609b5a7b53d0876981019d8f801d57854fad3c6187ac" Nov 28 11:15:03 crc kubenswrapper[4772]: I1128 11:15:03.559758 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf" Nov 28 11:15:42 crc kubenswrapper[4772]: I1128 11:15:42.206767 4772 scope.go:117] "RemoveContainer" containerID="8f4fde0848c43a02f27937326e517fb2bb47704141488165aec07bd3294816ff" Nov 28 11:16:23 crc kubenswrapper[4772]: I1128 11:16:23.896656 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:16:23 crc kubenswrapper[4772]: I1128 11:16:23.897248 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:16:42 crc kubenswrapper[4772]: I1128 11:16:42.244480 4772 scope.go:117] "RemoveContainer" containerID="f30c98a8507ababbde14b625bf0dc966e5bf3c63ffd63a961bf82fe3003de637" Nov 28 11:16:42 crc kubenswrapper[4772]: I1128 11:16:42.258299 4772 scope.go:117] "RemoveContainer" containerID="4fb5a2cf67871f9db51e7f6f6bc8185e0d7ab36580c930084aa18dd8364cad8f" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.282548 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-tmmls"] Nov 28 11:16:50 crc kubenswrapper[4772]: E1128 11:16:50.283310 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c" containerName="collect-profiles" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.283321 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c" containerName="collect-profiles" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 
11:16:50.283427 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c" containerName="collect-profiles" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.283755 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-tmmls" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.285293 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.285491 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.288400 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-tmmls"] Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.288588 4772 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-qk4kk" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.294505 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hxhpx"] Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.295084 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-hxhpx" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.298249 4772 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-frqvm" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.309524 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hxhpx"] Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.318962 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-cq45f"] Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.319572 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-cq45f" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.337445 4772 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wndhr" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.355716 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-cq45f"] Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.480350 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58xjv\" (UniqueName: \"kubernetes.io/projected/e3fe3585-4df1-4fdc-ab0f-7c9c4ed0e6de-kube-api-access-58xjv\") pod \"cert-manager-webhook-5655c58dd6-cq45f\" (UID: \"e3fe3585-4df1-4fdc-ab0f-7c9c4ed0e6de\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-cq45f" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.480615 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2w9k\" (UniqueName: \"kubernetes.io/projected/ee885bfb-fd81-472f-8de7-2a64130e0141-kube-api-access-g2w9k\") pod \"cert-manager-cainjector-7f985d654d-tmmls\" (UID: \"ee885bfb-fd81-472f-8de7-2a64130e0141\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-tmmls" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.480659 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh7cq\" (UniqueName: \"kubernetes.io/projected/9aab53d9-b682-4d53-8d5e-8fd0498411e6-kube-api-access-hh7cq\") pod \"cert-manager-5b446d88c5-hxhpx\" (UID: \"9aab53d9-b682-4d53-8d5e-8fd0498411e6\") " pod="cert-manager/cert-manager-5b446d88c5-hxhpx" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.582513 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2w9k\" (UniqueName: \"kubernetes.io/projected/ee885bfb-fd81-472f-8de7-2a64130e0141-kube-api-access-g2w9k\") pod \"cert-manager-cainjector-7f985d654d-tmmls\" (UID: \"ee885bfb-fd81-472f-8de7-2a64130e0141\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-tmmls" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.582677 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh7cq\" (UniqueName: \"kubernetes.io/projected/9aab53d9-b682-4d53-8d5e-8fd0498411e6-kube-api-access-hh7cq\") pod \"cert-manager-5b446d88c5-hxhpx\" (UID: \"9aab53d9-b682-4d53-8d5e-8fd0498411e6\") " pod="cert-manager/cert-manager-5b446d88c5-hxhpx" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.582961 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58xjv\" (UniqueName: \"kubernetes.io/projected/e3fe3585-4df1-4fdc-ab0f-7c9c4ed0e6de-kube-api-access-58xjv\") pod \"cert-manager-webhook-5655c58dd6-cq45f\" (UID: \"e3fe3585-4df1-4fdc-ab0f-7c9c4ed0e6de\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-cq45f" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.604602 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2w9k\" (UniqueName: \"kubernetes.io/projected/ee885bfb-fd81-472f-8de7-2a64130e0141-kube-api-access-g2w9k\") pod \"cert-manager-cainjector-7f985d654d-tmmls\" (UID: \"ee885bfb-fd81-472f-8de7-2a64130e0141\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-tmmls" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.604960 4772 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-tmmls" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.606290 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh7cq\" (UniqueName: \"kubernetes.io/projected/9aab53d9-b682-4d53-8d5e-8fd0498411e6-kube-api-access-hh7cq\") pod \"cert-manager-5b446d88c5-hxhpx\" (UID: \"9aab53d9-b682-4d53-8d5e-8fd0498411e6\") " pod="cert-manager/cert-manager-5b446d88c5-hxhpx" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.611407 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-hxhpx" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.616540 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58xjv\" (UniqueName: \"kubernetes.io/projected/e3fe3585-4df1-4fdc-ab0f-7c9c4ed0e6de-kube-api-access-58xjv\") pod \"cert-manager-webhook-5655c58dd6-cq45f\" (UID: \"e3fe3585-4df1-4fdc-ab0f-7c9c4ed0e6de\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-cq45f" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.661503 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-cq45f" Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.821534 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-tmmls"] Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.834542 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.857333 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hxhpx"] Nov 28 11:16:50 crc kubenswrapper[4772]: W1128 11:16:50.860022 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9aab53d9_b682_4d53_8d5e_8fd0498411e6.slice/crio-5a478ab940f61f300d19854cad82124ffa87342aca90d04f84bc7bed5ace8261 WatchSource:0}: Error finding container 5a478ab940f61f300d19854cad82124ffa87342aca90d04f84bc7bed5ace8261: Status 404 returned error can't find the container with id 5a478ab940f61f300d19854cad82124ffa87342aca90d04f84bc7bed5ace8261 Nov 28 11:16:50 crc kubenswrapper[4772]: I1128 11:16:50.918631 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-cq45f"] Nov 28 11:16:50 crc kubenswrapper[4772]: W1128 11:16:50.922286 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3fe3585_4df1_4fdc_ab0f_7c9c4ed0e6de.slice/crio-a0ec85a2b4ed019c3e80b2db234797b6555745ac02d9ae5fca87b356437cb38c WatchSource:0}: Error finding container a0ec85a2b4ed019c3e80b2db234797b6555745ac02d9ae5fca87b356437cb38c: Status 404 returned error can't find the container with id a0ec85a2b4ed019c3e80b2db234797b6555745ac02d9ae5fca87b356437cb38c Nov 28 11:16:51 crc kubenswrapper[4772]: I1128 11:16:51.659154 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-hxhpx" event={"ID":"9aab53d9-b682-4d53-8d5e-8fd0498411e6","Type":"ContainerStarted","Data":"5a478ab940f61f300d19854cad82124ffa87342aca90d04f84bc7bed5ace8261"} Nov 28 11:16:51 crc kubenswrapper[4772]: I1128 11:16:51.661469 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-7f985d654d-tmmls" event={"ID":"ee885bfb-fd81-472f-8de7-2a64130e0141","Type":"ContainerStarted","Data":"3843197f360f87833e70022edea2dd7b54021571a96c19df940841c0d4b6fba4"} Nov 28 11:16:51 crc kubenswrapper[4772]: I1128 11:16:51.662595 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-cq45f" event={"ID":"e3fe3585-4df1-4fdc-ab0f-7c9c4ed0e6de","Type":"ContainerStarted","Data":"a0ec85a2b4ed019c3e80b2db234797b6555745ac02d9ae5fca87b356437cb38c"} Nov 28 11:16:53 crc kubenswrapper[4772]: I1128 11:16:53.896732 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:16:53 crc kubenswrapper[4772]: I1128 11:16:53.896787 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:16:54 crc kubenswrapper[4772]: I1128 11:16:54.679168 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-tmmls" event={"ID":"ee885bfb-fd81-472f-8de7-2a64130e0141","Type":"ContainerStarted","Data":"e703c752538e72af1bd361e25c5963dd5d1b1caa63e3247cfc133461ca740ea3"} Nov 28 11:16:54 crc kubenswrapper[4772]: I1128 11:16:54.680406 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-cq45f" event={"ID":"e3fe3585-4df1-4fdc-ab0f-7c9c4ed0e6de","Type":"ContainerStarted","Data":"b566604878d611331e3bce16f042831ed316eaf0acaeb9b7519be47e9460bd2a"} Nov 28 11:16:54 crc kubenswrapper[4772]: I1128 11:16:54.680533 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-cq45f" Nov 28 11:16:54 crc kubenswrapper[4772]: I1128 11:16:54.681382 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-hxhpx" event={"ID":"9aab53d9-b682-4d53-8d5e-8fd0498411e6","Type":"ContainerStarted","Data":"a405d7816e34fab74e819f71c57a55925ff3e2b6b31028b1da90f530b51a70bf"} Nov 28 11:16:54 crc kubenswrapper[4772]: I1128 11:16:54.695752 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-tmmls" podStartSLOduration=1.358264652 podStartE2EDuration="4.695728157s" podCreationTimestamp="2025-11-28 11:16:50 +0000 UTC" firstStartedPulling="2025-11-28 11:16:50.834209403 +0000 UTC m=+609.157452630" lastFinishedPulling="2025-11-28 11:16:54.171672888 +0000 UTC m=+612.494916135" observedRunningTime="2025-11-28 11:16:54.691089876 +0000 UTC m=+613.014333103" watchObservedRunningTime="2025-11-28 11:16:54.695728157 +0000 UTC m=+613.018971384" Nov 28 11:16:54 crc kubenswrapper[4772]: I1128 11:16:54.707028 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-cq45f" podStartSLOduration=1.4282145800000001 podStartE2EDuration="4.707006516s" podCreationTimestamp="2025-11-28 11:16:50 +0000 UTC" firstStartedPulling="2025-11-28 11:16:50.924051473 +0000 UTC m=+609.247294700" lastFinishedPulling="2025-11-28 11:16:54.202843399 +0000 UTC 
m=+612.526086636" observedRunningTime="2025-11-28 11:16:54.70644431 +0000 UTC m=+613.029687547" watchObservedRunningTime="2025-11-28 11:16:54.707006516 +0000 UTC m=+613.030249753" Nov 28 11:16:54 crc kubenswrapper[4772]: I1128 11:16:54.726159 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-hxhpx" podStartSLOduration=1.423773704 podStartE2EDuration="4.726141037s" podCreationTimestamp="2025-11-28 11:16:50 +0000 UTC" firstStartedPulling="2025-11-28 11:16:50.862283257 +0000 UTC m=+609.185526484" lastFinishedPulling="2025-11-28 11:16:54.16465059 +0000 UTC m=+612.487893817" observedRunningTime="2025-11-28 11:16:54.723222445 +0000 UTC m=+613.046465692" watchObservedRunningTime="2025-11-28 11:16:54.726141037 +0000 UTC m=+613.049384274" Nov 28 11:17:00 crc kubenswrapper[4772]: I1128 11:17:00.666462 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-cq45f" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.058265 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b7vdn"] Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.059100 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovn-controller" containerID="cri-o://bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79" gracePeriod=30 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.059224 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="nbdb" containerID="cri-o://c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855" gracePeriod=30 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.059470 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="sbdb" containerID="cri-o://85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a" gracePeriod=30 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.059588 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="kube-rbac-proxy-node" containerID="cri-o://41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9" gracePeriod=30 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.059669 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="northd" containerID="cri-o://1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb" gracePeriod=30 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.059675 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630" gracePeriod=30 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.059661 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" 
podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovn-acl-logging" containerID="cri-o://c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201" gracePeriod=30 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.094963 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" containerID="cri-o://a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d" gracePeriod=30 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.410188 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/3.log" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.412574 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovn-acl-logging/0.log" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.413050 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovn-controller/0.log" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.421073 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472284 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5lxc4"] Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.472580 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="kubecfg-setup" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472601 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="kubecfg-setup" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.472617 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472628 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.472635 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="nbdb" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472644 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="nbdb" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.472656 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="northd" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472662 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="northd" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.472669 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovn-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472675 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" 
containerName="ovn-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.472689 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472695 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.472705 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="sbdb" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472712 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="sbdb" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.472744 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472754 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.472764 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="kube-rbac-proxy-node" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472772 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="kube-rbac-proxy-node" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.472780 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472786 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.472793 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovn-acl-logging" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472799 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovn-acl-logging" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.472805 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472812 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.472819 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472827 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472943 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="kube-rbac-proxy-node" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472954 4772 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovn-acl-logging" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472962 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472969 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472977 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="northd" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472983 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.472991 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="nbdb" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.473021 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="sbdb" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.473028 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovn-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.473036 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.473044 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.473260 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerName="ovnkube-controller" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.475576 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622526 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-run-ovn-kubernetes\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622587 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j87wl\" (UniqueName: \"kubernetes.io/projected/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-kube-api-access-j87wl\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622611 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-node-log\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622643 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-etc-openvswitch\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622666 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-env-overrides\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622679 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-var-lib-openvswitch\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622678 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622694 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-node-log" (OuterVolumeSpecName: "node-log") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622710 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622741 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622753 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-slash\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622772 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622781 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-cni-bin\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622793 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622806 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-config\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622816 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-slash" (OuterVolumeSpecName: "host-slash") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622825 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-systemd-units\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622846 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-kubelet\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622846 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622870 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-openvswitch\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622881 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622901 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-run-netns\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622906 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622923 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-script-lib\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622929 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622949 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-log-socket\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.622952 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623068 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623098 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-ovn\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623120 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-log-socket" (OuterVolumeSpecName: "log-socket") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623130 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-cni-netd\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623158 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovn-node-metrics-cert\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623183 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-systemd\") pod \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\" (UID: \"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a\") " Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623157 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623176 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623199 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623280 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-run-openvswitch\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623311 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-run-ovn\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623335 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-run-netns\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623375 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623399 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-log-socket\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623422 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61890b15-238d-4f7d-8554-a0e5995a437c-env-overrides\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623439 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623446 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61890b15-238d-4f7d-8554-a0e5995a437c-ovn-node-metrics-cert\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623510 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-run-systemd\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623535 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-var-lib-openvswitch\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623562 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-kubelet\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623607 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-cni-bin\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623727 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-etc-openvswitch\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623764 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-systemd-units\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623818 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-cni-netd\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 
11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623833 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-slash\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623857 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m74vk\" (UniqueName: \"kubernetes.io/projected/61890b15-238d-4f7d-8554-a0e5995a437c-kube-api-access-m74vk\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623875 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61890b15-238d-4f7d-8554-a0e5995a437c-ovnkube-script-lib\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623891 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-node-log\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623914 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-run-ovn-kubernetes\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.623954 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61890b15-238d-4f7d-8554-a0e5995a437c-ovnkube-config\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624020 4772 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-log-socket\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624032 4772 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624041 4772 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624053 4772 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 
11:17:01.624065 4772 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-node-log\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624073 4772 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624082 4772 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624090 4772 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624099 4772 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624108 4772 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-slash\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624117 4772 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624126 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624134 4772 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624143 4772 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624152 4772 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624160 4772 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.624169 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.628198 4772 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-kube-api-access-j87wl" (OuterVolumeSpecName: "kube-api-access-j87wl") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "kube-api-access-j87wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.628621 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.636592 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" (UID: "52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.718104 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qsnnj_a4e5807b-7c14-477e-af8b-1260b997ff17/kube-multus/2.log" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.718529 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qsnnj_a4e5807b-7c14-477e-af8b-1260b997ff17/kube-multus/1.log" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.718561 4772 generic.go:334] "Generic (PLEG): container finished" podID="a4e5807b-7c14-477e-af8b-1260b997ff17" containerID="252b4a3f25f72207c739fb18e3bec006c661da277d345c7af2069279d0879002" exitCode=2 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.718610 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qsnnj" event={"ID":"a4e5807b-7c14-477e-af8b-1260b997ff17","Type":"ContainerDied","Data":"252b4a3f25f72207c739fb18e3bec006c661da277d345c7af2069279d0879002"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.718640 4772 scope.go:117] "RemoveContainer" containerID="125d71e6561215a264909d21c3847fb2269b14c5933345eee667243e9cbf3a4a" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.719232 4772 scope.go:117] "RemoveContainer" containerID="252b4a3f25f72207c739fb18e3bec006c661da277d345c7af2069279d0879002" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.719646 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-qsnnj_openshift-multus(a4e5807b-7c14-477e-af8b-1260b997ff17)\"" pod="openshift-multus/multus-qsnnj" podUID="a4e5807b-7c14-477e-af8b-1260b997ff17" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.722124 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovnkube-controller/3.log" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724400 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61890b15-238d-4f7d-8554-a0e5995a437c-ovnkube-script-lib\") pod 
\"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724446 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-node-log\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724468 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-run-ovn-kubernetes\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724492 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61890b15-238d-4f7d-8554-a0e5995a437c-ovnkube-config\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724525 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-run-openvswitch\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724539 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-run-ovn\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724556 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-run-netns\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724576 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724597 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-log-socket\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724609 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovn-acl-logging/0.log" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724617 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61890b15-238d-4f7d-8554-a0e5995a437c-env-overrides\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724756 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61890b15-238d-4f7d-8554-a0e5995a437c-ovn-node-metrics-cert\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724814 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-run-systemd\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724851 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-var-lib-openvswitch\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724893 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-kubelet\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.724949 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-cni-bin\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725102 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-etc-openvswitch\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725149 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-systemd-units\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725229 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-cni-netd\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725267 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-slash\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725309 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m74vk\" (UniqueName: \"kubernetes.io/projected/61890b15-238d-4f7d-8554-a0e5995a437c-kube-api-access-m74vk\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725429 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725461 4772 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725483 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j87wl\" (UniqueName: \"kubernetes.io/projected/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a-kube-api-access-j87wl\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725573 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-run-openvswitch\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725571 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-run-netns\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725601 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-run-ovn\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725644 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725668 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61890b15-238d-4f7d-8554-a0e5995a437c-ovnkube-script-lib\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725712 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-log-socket\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725743 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-systemd-units\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725776 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-node-log\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725800 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-run-ovn-kubernetes\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725152 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61890b15-238d-4f7d-8554-a0e5995a437c-env-overrides\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725886 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-cni-netd\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725931 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-etc-openvswitch\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725934 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-slash\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.725967 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-var-lib-openvswitch\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726001 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-run-systemd\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726028 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-kubelet\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726058 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61890b15-238d-4f7d-8554-a0e5995a437c-host-cni-bin\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726386 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61890b15-238d-4f7d-8554-a0e5995a437c-ovnkube-config\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726511 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7vdn_52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/ovn-controller/0.log" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726840 4772 generic.go:334] "Generic (PLEG): container finished" podID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerID="a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d" exitCode=0 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726863 4772 generic.go:334] "Generic (PLEG): container finished" podID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerID="85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a" exitCode=0 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726872 4772 generic.go:334] "Generic (PLEG): container finished" podID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerID="c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855" exitCode=0 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726882 4772 generic.go:334] "Generic (PLEG): container finished" podID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerID="1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb" exitCode=0 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726890 4772 generic.go:334] "Generic (PLEG): container finished" podID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerID="d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630" exitCode=0 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726898 4772 generic.go:334] "Generic (PLEG): container finished" podID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerID="41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9" exitCode=0 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726905 4772 generic.go:334] "Generic (PLEG): container finished" podID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerID="c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201" exitCode=143 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726913 4772 generic.go:334] "Generic (PLEG): container finished" podID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" containerID="bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79" exitCode=143 Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726932 4772 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerDied","Data":"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726956 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerDied","Data":"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726970 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerDied","Data":"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726982 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerDied","Data":"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.726992 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerDied","Data":"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727003 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerDied","Data":"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727014 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727025 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727032 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727039 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727046 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727052 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727058 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9"} Nov 28 
11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727064 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727070 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727086 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727096 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerDied","Data":"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727107 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727115 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727121 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727129 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727135 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727141 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727146 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727152 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727159 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727165 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1"} Nov 28 
11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727174 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerDied","Data":"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727183 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727190 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727197 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727203 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727209 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727215 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727221 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727229 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727235 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727242 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727251 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7vdn" event={"ID":"52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a","Type":"ContainerDied","Data":"28841ba68f77615df84f63141d03539694a1af2a72e0eafbf151ad7573ac556a"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727261 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727268 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727275 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727282 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727288 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727295 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727302 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727308 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727314 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727321 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1"} Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.727431 4772 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.730304 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61890b15-238d-4f7d-8554-a0e5995a437c-ovn-node-metrics-cert\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.743890 4772 scope.go:117] "RemoveContainer" containerID="a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.748598 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m74vk\" (UniqueName: \"kubernetes.io/projected/61890b15-238d-4f7d-8554-a0e5995a437c-kube-api-access-m74vk\") pod \"ovnkube-node-5lxc4\" (UID: \"61890b15-238d-4f7d-8554-a0e5995a437c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.763930 4772 scope.go:117] "RemoveContainer" containerID="1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.769588 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b7vdn"] Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.778756 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b7vdn"] Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.786224 4772 scope.go:117] "RemoveContainer" containerID="85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.790913 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.805610 4772 scope.go:117] "RemoveContainer" containerID="c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.818544 4772 scope.go:117] "RemoveContainer" containerID="1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.834544 4772 scope.go:117] "RemoveContainer" containerID="d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.849049 4772 scope.go:117] "RemoveContainer" containerID="41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.878045 4772 scope.go:117] "RemoveContainer" containerID="c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.909951 4772 scope.go:117] "RemoveContainer" containerID="bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.922021 4772 scope.go:117] "RemoveContainer" containerID="d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.935844 4772 scope.go:117] "RemoveContainer" containerID="a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.936181 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d\": container with ID starting with a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d not found: ID does not exist" containerID="a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.936234 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d"} err="failed to get container status \"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d\": rpc error: code = NotFound desc = could not find container \"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d\": container with ID starting with a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.936261 4772 scope.go:117] "RemoveContainer" containerID="1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.936533 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\": container with ID starting with 1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f not found: ID does not exist" containerID="1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.936559 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f"} err="failed to get container status \"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\": rpc 
error: code = NotFound desc = could not find container \"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\": container with ID starting with 1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.936576 4772 scope.go:117] "RemoveContainer" containerID="85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.936826 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\": container with ID starting with 85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a not found: ID does not exist" containerID="85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.936856 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a"} err="failed to get container status \"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\": rpc error: code = NotFound desc = could not find container \"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\": container with ID starting with 85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.936881 4772 scope.go:117] "RemoveContainer" containerID="c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.938557 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\": container with ID starting with c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855 not found: ID does not exist" containerID="c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.938605 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855"} err="failed to get container status \"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\": rpc error: code = NotFound desc = could not find container \"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\": container with ID starting with c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.938633 4772 scope.go:117] "RemoveContainer" containerID="1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.939098 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\": container with ID starting with 1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb not found: ID does not exist" containerID="1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.939121 4772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb"} err="failed to get container status \"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\": rpc error: code = NotFound desc = could not find container \"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\": container with ID starting with 1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.939134 4772 scope.go:117] "RemoveContainer" containerID="d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.939320 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\": container with ID starting with d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630 not found: ID does not exist" containerID="d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.939377 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630"} err="failed to get container status \"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\": rpc error: code = NotFound desc = could not find container \"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\": container with ID starting with d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.939394 4772 scope.go:117] "RemoveContainer" containerID="41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.939739 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\": container with ID starting with 41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9 not found: ID does not exist" containerID="41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.939764 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9"} err="failed to get container status \"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\": rpc error: code = NotFound desc = could not find container \"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\": container with ID starting with 41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.939784 4772 scope.go:117] "RemoveContainer" containerID="c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.940002 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\": container with ID starting with c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201 not found: ID does not exist" 
containerID="c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.940027 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201"} err="failed to get container status \"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\": rpc error: code = NotFound desc = could not find container \"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\": container with ID starting with c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.940041 4772 scope.go:117] "RemoveContainer" containerID="bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.940281 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\": container with ID starting with bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79 not found: ID does not exist" containerID="bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.940299 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79"} err="failed to get container status \"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\": rpc error: code = NotFound desc = could not find container \"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\": container with ID starting with bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.940311 4772 scope.go:117] "RemoveContainer" containerID="d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1" Nov 28 11:17:01 crc kubenswrapper[4772]: E1128 11:17:01.941410 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\": container with ID starting with d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1 not found: ID does not exist" containerID="d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.941453 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1"} err="failed to get container status \"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\": rpc error: code = NotFound desc = could not find container \"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\": container with ID starting with d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.941479 4772 scope.go:117] "RemoveContainer" containerID="a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.941809 4772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d"} err="failed to get container status \"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d\": rpc error: code = NotFound desc = could not find container \"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d\": container with ID starting with a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.941831 4772 scope.go:117] "RemoveContainer" containerID="1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.942167 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f"} err="failed to get container status \"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\": rpc error: code = NotFound desc = could not find container \"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\": container with ID starting with 1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.942191 4772 scope.go:117] "RemoveContainer" containerID="85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.942457 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a"} err="failed to get container status \"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\": rpc error: code = NotFound desc = could not find container \"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\": container with ID starting with 85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.942514 4772 scope.go:117] "RemoveContainer" containerID="c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.943086 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855"} err="failed to get container status \"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\": rpc error: code = NotFound desc = could not find container \"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\": container with ID starting with c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.943107 4772 scope.go:117] "RemoveContainer" containerID="1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.943478 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb"} err="failed to get container status \"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\": rpc error: code = NotFound desc = could not find container \"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\": container with ID starting with 1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb not found: ID does not exist" Nov 
28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.943496 4772 scope.go:117] "RemoveContainer" containerID="d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.943689 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630"} err="failed to get container status \"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\": rpc error: code = NotFound desc = could not find container \"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\": container with ID starting with d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.943714 4772 scope.go:117] "RemoveContainer" containerID="41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.943949 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9"} err="failed to get container status \"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\": rpc error: code = NotFound desc = could not find container \"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\": container with ID starting with 41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.943972 4772 scope.go:117] "RemoveContainer" containerID="c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.944697 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201"} err="failed to get container status \"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\": rpc error: code = NotFound desc = could not find container \"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\": container with ID starting with c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.944720 4772 scope.go:117] "RemoveContainer" containerID="bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.944976 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79"} err="failed to get container status \"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\": rpc error: code = NotFound desc = could not find container \"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\": container with ID starting with bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.945003 4772 scope.go:117] "RemoveContainer" containerID="d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.945278 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1"} err="failed to get container status 
\"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\": rpc error: code = NotFound desc = could not find container \"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\": container with ID starting with d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.945294 4772 scope.go:117] "RemoveContainer" containerID="a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.945532 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d"} err="failed to get container status \"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d\": rpc error: code = NotFound desc = could not find container \"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d\": container with ID starting with a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.945546 4772 scope.go:117] "RemoveContainer" containerID="1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.945707 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f"} err="failed to get container status \"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\": rpc error: code = NotFound desc = could not find container \"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\": container with ID starting with 1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.945726 4772 scope.go:117] "RemoveContainer" containerID="85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.945914 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a"} err="failed to get container status \"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\": rpc error: code = NotFound desc = could not find container \"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\": container with ID starting with 85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.945928 4772 scope.go:117] "RemoveContainer" containerID="c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.946195 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855"} err="failed to get container status \"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\": rpc error: code = NotFound desc = could not find container \"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\": container with ID starting with c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.946223 4772 scope.go:117] "RemoveContainer" 
containerID="1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.946521 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb"} err="failed to get container status \"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\": rpc error: code = NotFound desc = could not find container \"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\": container with ID starting with 1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.946537 4772 scope.go:117] "RemoveContainer" containerID="d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.946711 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630"} err="failed to get container status \"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\": rpc error: code = NotFound desc = could not find container \"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\": container with ID starting with d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.946730 4772 scope.go:117] "RemoveContainer" containerID="41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.947220 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9"} err="failed to get container status \"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\": rpc error: code = NotFound desc = could not find container \"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\": container with ID starting with 41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.947248 4772 scope.go:117] "RemoveContainer" containerID="c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.947482 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201"} err="failed to get container status \"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\": rpc error: code = NotFound desc = could not find container \"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\": container with ID starting with c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.947502 4772 scope.go:117] "RemoveContainer" containerID="bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.947710 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79"} err="failed to get container status \"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\": rpc error: code = NotFound desc = could not find 
container \"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\": container with ID starting with bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.947727 4772 scope.go:117] "RemoveContainer" containerID="d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.948058 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1"} err="failed to get container status \"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\": rpc error: code = NotFound desc = could not find container \"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\": container with ID starting with d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.948097 4772 scope.go:117] "RemoveContainer" containerID="a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.948596 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d"} err="failed to get container status \"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d\": rpc error: code = NotFound desc = could not find container \"a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d\": container with ID starting with a49b3d8d816d12c4c6166c02ea05d4cc77b61b51cacb1092fe97a6ccc204e61d not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.948633 4772 scope.go:117] "RemoveContainer" containerID="1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.949017 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f"} err="failed to get container status \"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\": rpc error: code = NotFound desc = could not find container \"1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f\": container with ID starting with 1c81c199b6a164ce923d787f7625f0b732af125245b4e27188a3ba951d82af9f not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.949042 4772 scope.go:117] "RemoveContainer" containerID="85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.950173 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a"} err="failed to get container status \"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\": rpc error: code = NotFound desc = could not find container \"85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a\": container with ID starting with 85a28392c6e71539687f7c687dee6aa037ffefc3089f7db8599a1cbe2ee63d3a not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.950192 4772 scope.go:117] "RemoveContainer" containerID="c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.950675 4772 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855"} err="failed to get container status \"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\": rpc error: code = NotFound desc = could not find container \"c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855\": container with ID starting with c21c7764e5212a6c25b1e3905e9af9bef1ef8f406cadcf36b40e59f8000bd855 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.950718 4772 scope.go:117] "RemoveContainer" containerID="1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.951108 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb"} err="failed to get container status \"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\": rpc error: code = NotFound desc = could not find container \"1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb\": container with ID starting with 1aa74ce35734ed5bdd3c234ff650a11d27a609af733a442f7164b503226a20fb not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.951141 4772 scope.go:117] "RemoveContainer" containerID="d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.951646 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630"} err="failed to get container status \"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\": rpc error: code = NotFound desc = could not find container \"d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630\": container with ID starting with d3db7b62e70f7b1bef75f292e5482729a5d2f2857bac90c00b1b7f9a2d24a630 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.951685 4772 scope.go:117] "RemoveContainer" containerID="41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.951975 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9"} err="failed to get container status \"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\": rpc error: code = NotFound desc = could not find container \"41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9\": container with ID starting with 41815ac101773634ee994d8ffe4a73bfb0371c83d5c7a63e5c5d9a6f9a6642c9 not found: ID does not exist" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.952002 4772 scope.go:117] "RemoveContainer" containerID="c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201" Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.952514 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201"} err="failed to get container status \"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\": rpc error: code = NotFound desc = could not find container \"c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201\": container with ID starting with 
c1d7117e53d2ba3b63a8bdc723f33c93f18f970399e229c12e3e5c7c74812201 not found: ID does not exist"
Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.952549 4772 scope.go:117] "RemoveContainer" containerID="bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79"
Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.952974 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79"} err="failed to get container status \"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\": rpc error: code = NotFound desc = could not find container \"bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79\": container with ID starting with bf6462d4c2f7b79ebd20371c04be7c77902808d190b7a2a4e30aaa6b52918f79 not found: ID does not exist"
Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.953080 4772 scope.go:117] "RemoveContainer" containerID="d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1"
Nov 28 11:17:01 crc kubenswrapper[4772]: I1128 11:17:01.953700 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1"} err="failed to get container status \"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\": rpc error: code = NotFound desc = could not find container \"d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1\": container with ID starting with d99726f0224681c9fc6206ff06d216be46361008d98c3952953f92b0bf04a2e1 not found: ID does not exist"
Nov 28 11:17:02 crc kubenswrapper[4772]: I1128 11:17:02.004487 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a" path="/var/lib/kubelet/pods/52f8704c-e8fc-4a0e-bfd7-94d78ee6f09a/volumes"
Nov 28 11:17:02 crc kubenswrapper[4772]: I1128 11:17:02.733859 4772 generic.go:334] "Generic (PLEG): container finished" podID="61890b15-238d-4f7d-8554-a0e5995a437c" containerID="6d7b6ec3e70b67591535b3d7edcec091813bab93b3e51744fc6b17b2616c8753" exitCode=0
Nov 28 11:17:02 crc kubenswrapper[4772]: I1128 11:17:02.733937 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" event={"ID":"61890b15-238d-4f7d-8554-a0e5995a437c","Type":"ContainerDied","Data":"6d7b6ec3e70b67591535b3d7edcec091813bab93b3e51744fc6b17b2616c8753"}
Nov 28 11:17:02 crc kubenswrapper[4772]: I1128 11:17:02.734273 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" event={"ID":"61890b15-238d-4f7d-8554-a0e5995a437c","Type":"ContainerStarted","Data":"87e8243e99c2a6013e65d9a27cdb4e1eb6080e2ba31d5cf219acdbbc4a5b049f"}
Nov 28 11:17:02 crc kubenswrapper[4772]: I1128 11:17:02.737347 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qsnnj_a4e5807b-7c14-477e-af8b-1260b997ff17/kube-multus/2.log"
Nov 28 11:17:03 crc kubenswrapper[4772]: I1128 11:17:03.756614 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" event={"ID":"61890b15-238d-4f7d-8554-a0e5995a437c","Type":"ContainerStarted","Data":"443d0c2e90e792744f6431712fc382bcea27295f5968ec2f343f8b20c9701766"}
Nov 28 11:17:03 crc kubenswrapper[4772]: I1128 11:17:03.757221 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" event={"ID":"61890b15-238d-4f7d-8554-a0e5995a437c","Type":"ContainerStarted","Data":"a51922998e31831bfc3f2f441ce4938dae502c0b5bd4aa9f4887b948f719c1c1"}
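The burst above is the kubelet clearing dead-container records for an already-deleted pod: scope.go replays "RemoveContainer" for every container ID still listed in the pod's status, CRI-O has already removed those containers, and each DeleteContainer RPC comes back NotFound. The same ten IDs cycle through three passes inside roughly 8 ms, all logged at info level, so this is cleanup noise rather than a failure. A minimal sketch for collapsing such floods when reading an export like this one (it assumes one record per line, as reflowed above; the regex is illustrative, not a stable log contract):

#!/usr/bin/env python3
# Collapse repeated "DeleteContainer returned error ... NotFound" records
# into one line per container ID with a retry count.
import re
import sys
from collections import Counter

NOTFOUND = re.compile(
    r'DeleteContainer returned error.*?"ID":"([0-9a-f]{64})".*?code = NotFound'
)

counts = Counter()
for line in sys.stdin:
    m = NOTFOUND.search(line)
    if m:
        counts[m.group(1)] += 1

for cid, n in counts.most_common():
    print(f"{n:3d}x NotFound on delete: {cid[:12]}...")

Fed the surrounding journal, it reports each of the ten IDs with its retry count instead of dozens of near-identical raw records.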
event={"ID":"61890b15-238d-4f7d-8554-a0e5995a437c","Type":"ContainerStarted","Data":"a51922998e31831bfc3f2f441ce4938dae502c0b5bd4aa9f4887b948f719c1c1"} Nov 28 11:17:03 crc kubenswrapper[4772]: I1128 11:17:03.757245 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" event={"ID":"61890b15-238d-4f7d-8554-a0e5995a437c","Type":"ContainerStarted","Data":"26b48e267f00dadbecde75de21d2015b820deec4b20cf184f4373419600e126e"} Nov 28 11:17:03 crc kubenswrapper[4772]: I1128 11:17:03.757265 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" event={"ID":"61890b15-238d-4f7d-8554-a0e5995a437c","Type":"ContainerStarted","Data":"8517b1f19fbf236bed07a049f6cc0575c12bc4e32067bc0c63ae7fb6a17e454d"} Nov 28 11:17:03 crc kubenswrapper[4772]: I1128 11:17:03.757282 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" event={"ID":"61890b15-238d-4f7d-8554-a0e5995a437c","Type":"ContainerStarted","Data":"c34568b0a23cc51d09acfa034e14a7cc33ce93fa5c88acab26e3710b7f9fe526"} Nov 28 11:17:03 crc kubenswrapper[4772]: I1128 11:17:03.757300 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" event={"ID":"61890b15-238d-4f7d-8554-a0e5995a437c","Type":"ContainerStarted","Data":"6cc0457210c4d8024bf74d34099a544c95386b7880d0bab7af6a6297c88af169"} Nov 28 11:17:05 crc kubenswrapper[4772]: I1128 11:17:05.778245 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" event={"ID":"61890b15-238d-4f7d-8554-a0e5995a437c","Type":"ContainerStarted","Data":"e68c398d4e119ea77586fba3d73e3032f50853efe8ce4d195629c418e3b187af"} Nov 28 11:17:08 crc kubenswrapper[4772]: I1128 11:17:08.800825 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" event={"ID":"61890b15-238d-4f7d-8554-a0e5995a437c","Type":"ContainerStarted","Data":"6530796da1c19a57ca6712c925271273aae54e53216b7f2c0ef7327398a26bec"} Nov 28 11:17:08 crc kubenswrapper[4772]: I1128 11:17:08.801392 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:08 crc kubenswrapper[4772]: I1128 11:17:08.801407 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:08 crc kubenswrapper[4772]: I1128 11:17:08.828392 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:08 crc kubenswrapper[4772]: I1128 11:17:08.843659 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" podStartSLOduration=7.843635407 podStartE2EDuration="7.843635407s" podCreationTimestamp="2025-11-28 11:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:17:08.837858154 +0000 UTC m=+627.161101381" watchObservedRunningTime="2025-11-28 11:17:08.843635407 +0000 UTC m=+627.166878654" Nov 28 11:17:09 crc kubenswrapper[4772]: I1128 11:17:09.809122 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:09 crc kubenswrapper[4772]: I1128 11:17:09.853830 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4" Nov 28 11:17:16 crc kubenswrapper[4772]: I1128 11:17:16.995100 4772 scope.go:117] "RemoveContainer" containerID="252b4a3f25f72207c739fb18e3bec006c661da277d345c7af2069279d0879002" Nov 28 11:17:16 crc kubenswrapper[4772]: E1128 11:17:16.996277 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-qsnnj_openshift-multus(a4e5807b-7c14-477e-af8b-1260b997ff17)\"" pod="openshift-multus/multus-qsnnj" podUID="a4e5807b-7c14-477e-af8b-1260b997ff17" Nov 28 11:17:23 crc kubenswrapper[4772]: I1128 11:17:23.899404 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:17:23 crc kubenswrapper[4772]: I1128 11:17:23.900005 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:17:23 crc kubenswrapper[4772]: I1128 11:17:23.900073 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:17:23 crc kubenswrapper[4772]: I1128 11:17:23.900770 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"719ebb3dbeb04504957f753c3982248f2e3853f40081e16785cf8530808c4dd9"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 11:17:23 crc kubenswrapper[4772]: I1128 11:17:23.900835 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://719ebb3dbeb04504957f753c3982248f2e3853f40081e16785cf8530808c4dd9" gracePeriod=600 Nov 28 11:17:24 crc kubenswrapper[4772]: I1128 11:17:24.930145 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="719ebb3dbeb04504957f753c3982248f2e3853f40081e16785cf8530808c4dd9" exitCode=0 Nov 28 11:17:24 crc kubenswrapper[4772]: I1128 11:17:24.930208 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"719ebb3dbeb04504957f753c3982248f2e3853f40081e16785cf8530808c4dd9"} Nov 28 11:17:24 crc kubenswrapper[4772]: I1128 11:17:24.930321 4772 scope.go:117] "RemoveContainer" containerID="beef7ccb3e0e2e5ae83a32f266ec8c15aa9fff63861b33defb11415294193bf3" Nov 28 11:17:25 crc kubenswrapper[4772]: I1128 11:17:25.938414 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"113f0827970f37075ba4d848729cf75c46d547c1a460d92b4daa91c0fd781747"} Nov 28 11:17:28 crc 
Nov 28 11:17:29 crc kubenswrapper[4772]: I1128 11:17:29.960227 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qsnnj_a4e5807b-7c14-477e-af8b-1260b997ff17/kube-multus/2.log"
Nov 28 11:17:29 crc kubenswrapper[4772]: I1128 11:17:29.961194 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qsnnj" event={"ID":"a4e5807b-7c14-477e-af8b-1260b997ff17","Type":"ContainerStarted","Data":"2c1dacefe7336dd2478204fc99263fa2bb653b3193fb24d51d3b279ad828b192"}
Nov 28 11:17:31 crc kubenswrapper[4772]: I1128 11:17:31.816744 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5lxc4"
Nov 28 11:17:40 crc kubenswrapper[4772]: I1128 11:17:40.425278 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn"]
Nov 28 11:17:40 crc kubenswrapper[4772]: I1128 11:17:40.427010 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn"
Nov 28 11:17:40 crc kubenswrapper[4772]: I1128 11:17:40.429059 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 28 11:17:40 crc kubenswrapper[4772]: I1128 11:17:40.431232 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn"]
Nov 28 11:17:40 crc kubenswrapper[4772]: I1128 11:17:40.508028 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/069b2332-5974-4cab-b15b-dfa1985ebce1-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn\" (UID: \"069b2332-5974-4cab-b15b-dfa1985ebce1\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn"
Nov 28 11:17:40 crc kubenswrapper[4772]: I1128 11:17:40.508082 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgwpk\" (UniqueName: \"kubernetes.io/projected/069b2332-5974-4cab-b15b-dfa1985ebce1-kube-api-access-rgwpk\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn\" (UID: \"069b2332-5974-4cab-b15b-dfa1985ebce1\") " 
pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn" Nov 28 11:17:40 crc kubenswrapper[4772]: I1128 11:17:40.608712 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/069b2332-5974-4cab-b15b-dfa1985ebce1-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn\" (UID: \"069b2332-5974-4cab-b15b-dfa1985ebce1\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn" Nov 28 11:17:40 crc kubenswrapper[4772]: I1128 11:17:40.608775 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/069b2332-5974-4cab-b15b-dfa1985ebce1-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn\" (UID: \"069b2332-5974-4cab-b15b-dfa1985ebce1\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn" Nov 28 11:17:40 crc kubenswrapper[4772]: I1128 11:17:40.609226 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/069b2332-5974-4cab-b15b-dfa1985ebce1-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn\" (UID: \"069b2332-5974-4cab-b15b-dfa1985ebce1\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn" Nov 28 11:17:40 crc kubenswrapper[4772]: I1128 11:17:40.609523 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/069b2332-5974-4cab-b15b-dfa1985ebce1-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn\" (UID: \"069b2332-5974-4cab-b15b-dfa1985ebce1\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn" Nov 28 11:17:40 crc kubenswrapper[4772]: I1128 11:17:40.625875 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgwpk\" (UniqueName: \"kubernetes.io/projected/069b2332-5974-4cab-b15b-dfa1985ebce1-kube-api-access-rgwpk\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn\" (UID: \"069b2332-5974-4cab-b15b-dfa1985ebce1\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn" Nov 28 11:17:40 crc kubenswrapper[4772]: I1128 11:17:40.744349 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn" Nov 28 11:17:41 crc kubenswrapper[4772]: I1128 11:17:41.171275 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn"] Nov 28 11:17:42 crc kubenswrapper[4772]: I1128 11:17:42.026238 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn" event={"ID":"069b2332-5974-4cab-b15b-dfa1985ebce1","Type":"ContainerStarted","Data":"45eaa0b929495f6c5881e7f882c1ff627559cf0a204746ace024cd9a6510dc06"} Nov 28 11:17:43 crc kubenswrapper[4772]: I1128 11:17:43.031508 4772 generic.go:334] "Generic (PLEG): container finished" podID="069b2332-5974-4cab-b15b-dfa1985ebce1" containerID="a621bbbd8af6f6b81ca123775f8341590c83465f76c7279dc52b5339b1e11a41" exitCode=0 Nov 28 11:17:43 crc kubenswrapper[4772]: I1128 11:17:43.031542 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn" event={"ID":"069b2332-5974-4cab-b15b-dfa1985ebce1","Type":"ContainerDied","Data":"a621bbbd8af6f6b81ca123775f8341590c83465f76c7279dc52b5339b1e11a41"} Nov 28 11:17:46 crc kubenswrapper[4772]: I1128 11:17:46.053327 4772 generic.go:334] "Generic (PLEG): container finished" podID="069b2332-5974-4cab-b15b-dfa1985ebce1" containerID="aec4c75cd934008a42229f1fd40b33ea0db342c863c37f94cb2cceb7bfe14e76" exitCode=0 Nov 28 11:17:46 crc kubenswrapper[4772]: I1128 11:17:46.053435 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn" event={"ID":"069b2332-5974-4cab-b15b-dfa1985ebce1","Type":"ContainerDied","Data":"aec4c75cd934008a42229f1fd40b33ea0db342c863c37f94cb2cceb7bfe14e76"} Nov 28 11:17:47 crc kubenswrapper[4772]: I1128 11:17:47.064764 4772 generic.go:334] "Generic (PLEG): container finished" podID="069b2332-5974-4cab-b15b-dfa1985ebce1" containerID="86abed50c0c3da7745150e3ae2fcd58b4d1515ee3b20b4361ea9d8a0d0336d7f" exitCode=0 Nov 28 11:17:47 crc kubenswrapper[4772]: I1128 11:17:47.064813 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn" event={"ID":"069b2332-5974-4cab-b15b-dfa1985ebce1","Type":"ContainerDied","Data":"86abed50c0c3da7745150e3ae2fcd58b4d1515ee3b20b4361ea9d8a0d0336d7f"} Nov 28 11:17:48 crc kubenswrapper[4772]: I1128 11:17:48.343304 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn" Nov 28 11:17:48 crc kubenswrapper[4772]: I1128 11:17:48.428967 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/069b2332-5974-4cab-b15b-dfa1985ebce1-util\") pod \"069b2332-5974-4cab-b15b-dfa1985ebce1\" (UID: \"069b2332-5974-4cab-b15b-dfa1985ebce1\") " Nov 28 11:17:48 crc kubenswrapper[4772]: I1128 11:17:48.429056 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgwpk\" (UniqueName: \"kubernetes.io/projected/069b2332-5974-4cab-b15b-dfa1985ebce1-kube-api-access-rgwpk\") pod \"069b2332-5974-4cab-b15b-dfa1985ebce1\" (UID: \"069b2332-5974-4cab-b15b-dfa1985ebce1\") " Nov 28 11:17:48 crc kubenswrapper[4772]: I1128 11:17:48.429123 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/069b2332-5974-4cab-b15b-dfa1985ebce1-bundle\") pod \"069b2332-5974-4cab-b15b-dfa1985ebce1\" (UID: \"069b2332-5974-4cab-b15b-dfa1985ebce1\") " Nov 28 11:17:48 crc kubenswrapper[4772]: I1128 11:17:48.430537 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/069b2332-5974-4cab-b15b-dfa1985ebce1-bundle" (OuterVolumeSpecName: "bundle") pod "069b2332-5974-4cab-b15b-dfa1985ebce1" (UID: "069b2332-5974-4cab-b15b-dfa1985ebce1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:17:48 crc kubenswrapper[4772]: I1128 11:17:48.439110 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069b2332-5974-4cab-b15b-dfa1985ebce1-kube-api-access-rgwpk" (OuterVolumeSpecName: "kube-api-access-rgwpk") pod "069b2332-5974-4cab-b15b-dfa1985ebce1" (UID: "069b2332-5974-4cab-b15b-dfa1985ebce1"). InnerVolumeSpecName "kube-api-access-rgwpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:17:48 crc kubenswrapper[4772]: I1128 11:17:48.443580 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/069b2332-5974-4cab-b15b-dfa1985ebce1-util" (OuterVolumeSpecName: "util") pod "069b2332-5974-4cab-b15b-dfa1985ebce1" (UID: "069b2332-5974-4cab-b15b-dfa1985ebce1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:17:48 crc kubenswrapper[4772]: I1128 11:17:48.530373 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgwpk\" (UniqueName: \"kubernetes.io/projected/069b2332-5974-4cab-b15b-dfa1985ebce1-kube-api-access-rgwpk\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:48 crc kubenswrapper[4772]: I1128 11:17:48.530416 4772 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/069b2332-5974-4cab-b15b-dfa1985ebce1-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:48 crc kubenswrapper[4772]: I1128 11:17:48.530429 4772 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/069b2332-5974-4cab-b15b-dfa1985ebce1-util\") on node \"crc\" DevicePath \"\"" Nov 28 11:17:49 crc kubenswrapper[4772]: I1128 11:17:49.079965 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn" event={"ID":"069b2332-5974-4cab-b15b-dfa1985ebce1","Type":"ContainerDied","Data":"45eaa0b929495f6c5881e7f882c1ff627559cf0a204746ace024cd9a6510dc06"} Nov 28 11:17:49 crc kubenswrapper[4772]: I1128 11:17:49.080039 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn" Nov 28 11:17:49 crc kubenswrapper[4772]: I1128 11:17:49.080048 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45eaa0b929495f6c5881e7f882c1ff627559cf0a204746ace024cd9a6510dc06" Nov 28 11:17:50 crc kubenswrapper[4772]: I1128 11:17:50.480726 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-2mwrb"] Nov 28 11:17:50 crc kubenswrapper[4772]: E1128 11:17:50.481156 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069b2332-5974-4cab-b15b-dfa1985ebce1" containerName="pull" Nov 28 11:17:50 crc kubenswrapper[4772]: I1128 11:17:50.481167 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="069b2332-5974-4cab-b15b-dfa1985ebce1" containerName="pull" Nov 28 11:17:50 crc kubenswrapper[4772]: E1128 11:17:50.481182 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069b2332-5974-4cab-b15b-dfa1985ebce1" containerName="util" Nov 28 11:17:50 crc kubenswrapper[4772]: I1128 11:17:50.481188 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="069b2332-5974-4cab-b15b-dfa1985ebce1" containerName="util" Nov 28 11:17:50 crc kubenswrapper[4772]: E1128 11:17:50.481196 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069b2332-5974-4cab-b15b-dfa1985ebce1" containerName="extract" Nov 28 11:17:50 crc kubenswrapper[4772]: I1128 11:17:50.481201 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="069b2332-5974-4cab-b15b-dfa1985ebce1" containerName="extract" Nov 28 11:17:50 crc kubenswrapper[4772]: I1128 11:17:50.481280 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="069b2332-5974-4cab-b15b-dfa1985ebce1" containerName="extract" Nov 28 11:17:50 crc kubenswrapper[4772]: I1128 11:17:50.481652 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2mwrb" Nov 28 11:17:50 crc kubenswrapper[4772]: I1128 11:17:50.494712 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 28 11:17:50 crc kubenswrapper[4772]: I1128 11:17:50.495062 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 28 11:17:50 crc kubenswrapper[4772]: I1128 11:17:50.496269 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2qrv5" Nov 28 11:17:50 crc kubenswrapper[4772]: I1128 11:17:50.504504 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-2mwrb"] Nov 28 11:17:50 crc kubenswrapper[4772]: I1128 11:17:50.572539 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2qpl\" (UniqueName: \"kubernetes.io/projected/e1c70388-5a93-4972-ab1f-24e87ab8498e-kube-api-access-g2qpl\") pod \"nmstate-operator-5b5b58f5c8-2mwrb\" (UID: \"e1c70388-5a93-4972-ab1f-24e87ab8498e\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2mwrb" Nov 28 11:17:50 crc kubenswrapper[4772]: I1128 11:17:50.673571 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2qpl\" (UniqueName: \"kubernetes.io/projected/e1c70388-5a93-4972-ab1f-24e87ab8498e-kube-api-access-g2qpl\") pod \"nmstate-operator-5b5b58f5c8-2mwrb\" (UID: \"e1c70388-5a93-4972-ab1f-24e87ab8498e\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2mwrb" Nov 28 11:17:50 crc kubenswrapper[4772]: I1128 11:17:50.696772 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2qpl\" (UniqueName: \"kubernetes.io/projected/e1c70388-5a93-4972-ab1f-24e87ab8498e-kube-api-access-g2qpl\") pod \"nmstate-operator-5b5b58f5c8-2mwrb\" (UID: \"e1c70388-5a93-4972-ab1f-24e87ab8498e\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2mwrb" Nov 28 11:17:50 crc kubenswrapper[4772]: I1128 11:17:50.796499 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2mwrb" Nov 28 11:17:51 crc kubenswrapper[4772]: I1128 11:17:51.058859 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-2mwrb"] Nov 28 11:17:51 crc kubenswrapper[4772]: W1128 11:17:51.066554 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1c70388_5a93_4972_ab1f_24e87ab8498e.slice/crio-9fa05d27b430452c0895792e04cb604d829fa99c4409285b70d99f91e8d372f2 WatchSource:0}: Error finding container 9fa05d27b430452c0895792e04cb604d829fa99c4409285b70d99f91e8d372f2: Status 404 returned error can't find the container with id 9fa05d27b430452c0895792e04cb604d829fa99c4409285b70d99f91e8d372f2 Nov 28 11:17:51 crc kubenswrapper[4772]: I1128 11:17:51.097695 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2mwrb" event={"ID":"e1c70388-5a93-4972-ab1f-24e87ab8498e","Type":"ContainerStarted","Data":"9fa05d27b430452c0895792e04cb604d829fa99c4409285b70d99f91e8d372f2"} Nov 28 11:17:54 crc kubenswrapper[4772]: I1128 11:17:54.116457 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2mwrb" event={"ID":"e1c70388-5a93-4972-ab1f-24e87ab8498e","Type":"ContainerStarted","Data":"f71a46fdfc289908d0affad216a05bce036fe2caa0156a6866c65ddbb713afd1"} Nov 28 11:17:54 crc kubenswrapper[4772]: I1128 11:17:54.143875 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2mwrb" podStartSLOduration=1.6331545589999998 podStartE2EDuration="4.143820054s" podCreationTimestamp="2025-11-28 11:17:50 +0000 UTC" firstStartedPulling="2025-11-28 11:17:51.079378174 +0000 UTC m=+669.402621421" lastFinishedPulling="2025-11-28 11:17:53.590043689 +0000 UTC m=+671.913286916" observedRunningTime="2025-11-28 11:17:54.139557447 +0000 UTC m=+672.462800674" watchObservedRunningTime="2025-11-28 11:17:54.143820054 +0000 UTC m=+672.467063311" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.710131 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-zj2qc"] Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.711781 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-zj2qc" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.716876 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-x96f9" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.728389 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-zj2qc"] Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.732412 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp"] Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.733082 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.743848 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.746631 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-qqs48"] Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.747897 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qqs48" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.754514 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp"] Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.803070 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56rv8\" (UniqueName: \"kubernetes.io/projected/ee79d6cb-a211-42c9-b669-9e202376834a-kube-api-access-56rv8\") pod \"nmstate-metrics-7f946cbc9-zj2qc\" (UID: \"ee79d6cb-a211-42c9-b669-9e202376834a\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-zj2qc" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.803112 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v64k6\" (UniqueName: \"kubernetes.io/projected/c53fcd9d-4c6c-4829-8caa-3cddd7c60442-kube-api-access-v64k6\") pod \"nmstate-webhook-5f6d4c5ccb-h6rxp\" (UID: \"c53fcd9d-4c6c-4829-8caa-3cddd7c60442\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.803137 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1bec0d53-d3cb-497c-8db1-646796c7194c-nmstate-lock\") pod \"nmstate-handler-qqs48\" (UID: \"1bec0d53-d3cb-497c-8db1-646796c7194c\") " pod="openshift-nmstate/nmstate-handler-qqs48" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.803158 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1bec0d53-d3cb-497c-8db1-646796c7194c-dbus-socket\") pod \"nmstate-handler-qqs48\" (UID: \"1bec0d53-d3cb-497c-8db1-646796c7194c\") " pod="openshift-nmstate/nmstate-handler-qqs48" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.803174 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c53fcd9d-4c6c-4829-8caa-3cddd7c60442-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-h6rxp\" (UID: \"c53fcd9d-4c6c-4829-8caa-3cddd7c60442\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.803189 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkt62\" (UniqueName: \"kubernetes.io/projected/1bec0d53-d3cb-497c-8db1-646796c7194c-kube-api-access-xkt62\") pod \"nmstate-handler-qqs48\" (UID: \"1bec0d53-d3cb-497c-8db1-646796c7194c\") " pod="openshift-nmstate/nmstate-handler-qqs48" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.803213 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1bec0d53-d3cb-497c-8db1-646796c7194c-ovs-socket\") 
pod \"nmstate-handler-qqs48\" (UID: \"1bec0d53-d3cb-497c-8db1-646796c7194c\") " pod="openshift-nmstate/nmstate-handler-qqs48" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.856833 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw"] Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.857619 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.860940 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.861233 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-5cqkh" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.861868 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.879192 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw"] Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.904296 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/dca0040a-69ac-4ff1-aefa-64b17329697f-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-7glrw\" (UID: \"dca0040a-69ac-4ff1-aefa-64b17329697f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.904893 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55hb2\" (UniqueName: \"kubernetes.io/projected/dca0040a-69ac-4ff1-aefa-64b17329697f-kube-api-access-55hb2\") pod \"nmstate-console-plugin-7fbb5f6569-7glrw\" (UID: \"dca0040a-69ac-4ff1-aefa-64b17329697f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.905037 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56rv8\" (UniqueName: \"kubernetes.io/projected/ee79d6cb-a211-42c9-b669-9e202376834a-kube-api-access-56rv8\") pod \"nmstate-metrics-7f946cbc9-zj2qc\" (UID: \"ee79d6cb-a211-42c9-b669-9e202376834a\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-zj2qc" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.905144 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v64k6\" (UniqueName: \"kubernetes.io/projected/c53fcd9d-4c6c-4829-8caa-3cddd7c60442-kube-api-access-v64k6\") pod \"nmstate-webhook-5f6d4c5ccb-h6rxp\" (UID: \"c53fcd9d-4c6c-4829-8caa-3cddd7c60442\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.905272 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1bec0d53-d3cb-497c-8db1-646796c7194c-nmstate-lock\") pod \"nmstate-handler-qqs48\" (UID: \"1bec0d53-d3cb-497c-8db1-646796c7194c\") " pod="openshift-nmstate/nmstate-handler-qqs48" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.905654 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1bec0d53-d3cb-497c-8db1-646796c7194c-nmstate-lock\") pod 
\"nmstate-handler-qqs48\" (UID: \"1bec0d53-d3cb-497c-8db1-646796c7194c\") " pod="openshift-nmstate/nmstate-handler-qqs48" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.905665 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1bec0d53-d3cb-497c-8db1-646796c7194c-dbus-socket\") pod \"nmstate-handler-qqs48\" (UID: \"1bec0d53-d3cb-497c-8db1-646796c7194c\") " pod="openshift-nmstate/nmstate-handler-qqs48" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.905732 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c53fcd9d-4c6c-4829-8caa-3cddd7c60442-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-h6rxp\" (UID: \"c53fcd9d-4c6c-4829-8caa-3cddd7c60442\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.905754 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkt62\" (UniqueName: \"kubernetes.io/projected/1bec0d53-d3cb-497c-8db1-646796c7194c-kube-api-access-xkt62\") pod \"nmstate-handler-qqs48\" (UID: \"1bec0d53-d3cb-497c-8db1-646796c7194c\") " pod="openshift-nmstate/nmstate-handler-qqs48" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.905779 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dca0040a-69ac-4ff1-aefa-64b17329697f-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-7glrw\" (UID: \"dca0040a-69ac-4ff1-aefa-64b17329697f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.905803 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1bec0d53-d3cb-497c-8db1-646796c7194c-ovs-socket\") pod \"nmstate-handler-qqs48\" (UID: \"1bec0d53-d3cb-497c-8db1-646796c7194c\") " pod="openshift-nmstate/nmstate-handler-qqs48" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.905876 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1bec0d53-d3cb-497c-8db1-646796c7194c-ovs-socket\") pod \"nmstate-handler-qqs48\" (UID: \"1bec0d53-d3cb-497c-8db1-646796c7194c\") " pod="openshift-nmstate/nmstate-handler-qqs48" Nov 28 11:18:00 crc kubenswrapper[4772]: E1128 11:18:00.905988 4772 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 28 11:18:00 crc kubenswrapper[4772]: E1128 11:18:00.906074 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c53fcd9d-4c6c-4829-8caa-3cddd7c60442-tls-key-pair podName:c53fcd9d-4c6c-4829-8caa-3cddd7c60442 nodeName:}" failed. No retries permitted until 2025-11-28 11:18:01.406050633 +0000 UTC m=+679.729293860 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/c53fcd9d-4c6c-4829-8caa-3cddd7c60442-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-h6rxp" (UID: "c53fcd9d-4c6c-4829-8caa-3cddd7c60442") : secret "openshift-nmstate-webhook" not found Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.906485 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1bec0d53-d3cb-497c-8db1-646796c7194c-dbus-socket\") pod \"nmstate-handler-qqs48\" (UID: \"1bec0d53-d3cb-497c-8db1-646796c7194c\") " pod="openshift-nmstate/nmstate-handler-qqs48" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.922474 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56rv8\" (UniqueName: \"kubernetes.io/projected/ee79d6cb-a211-42c9-b669-9e202376834a-kube-api-access-56rv8\") pod \"nmstate-metrics-7f946cbc9-zj2qc\" (UID: \"ee79d6cb-a211-42c9-b669-9e202376834a\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-zj2qc" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.922892 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v64k6\" (UniqueName: \"kubernetes.io/projected/c53fcd9d-4c6c-4829-8caa-3cddd7c60442-kube-api-access-v64k6\") pod \"nmstate-webhook-5f6d4c5ccb-h6rxp\" (UID: \"c53fcd9d-4c6c-4829-8caa-3cddd7c60442\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp" Nov 28 11:18:00 crc kubenswrapper[4772]: I1128 11:18:00.924833 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkt62\" (UniqueName: \"kubernetes.io/projected/1bec0d53-d3cb-497c-8db1-646796c7194c-kube-api-access-xkt62\") pod \"nmstate-handler-qqs48\" (UID: \"1bec0d53-d3cb-497c-8db1-646796c7194c\") " pod="openshift-nmstate/nmstate-handler-qqs48" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.006845 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dca0040a-69ac-4ff1-aefa-64b17329697f-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-7glrw\" (UID: \"dca0040a-69ac-4ff1-aefa-64b17329697f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.007164 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/dca0040a-69ac-4ff1-aefa-64b17329697f-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-7glrw\" (UID: \"dca0040a-69ac-4ff1-aefa-64b17329697f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.007187 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55hb2\" (UniqueName: \"kubernetes.io/projected/dca0040a-69ac-4ff1-aefa-64b17329697f-kube-api-access-55hb2\") pod \"nmstate-console-plugin-7fbb5f6569-7glrw\" (UID: \"dca0040a-69ac-4ff1-aefa-64b17329697f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw" Nov 28 11:18:01 crc kubenswrapper[4772]: E1128 11:18:01.007161 4772 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 28 11:18:01 crc kubenswrapper[4772]: E1128 11:18:01.007554 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dca0040a-69ac-4ff1-aefa-64b17329697f-plugin-serving-cert 
podName:dca0040a-69ac-4ff1-aefa-64b17329697f nodeName:}" failed. No retries permitted until 2025-11-28 11:18:01.50753806 +0000 UTC m=+679.830781287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/dca0040a-69ac-4ff1-aefa-64b17329697f-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-7glrw" (UID: "dca0040a-69ac-4ff1-aefa-64b17329697f") : secret "plugin-serving-cert" not found
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.008044 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/dca0040a-69ac-4ff1-aefa-64b17329697f-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-7glrw\" (UID: \"dca0040a-69ac-4ff1-aefa-64b17329697f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw"
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.025680 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55hb2\" (UniqueName: \"kubernetes.io/projected/dca0040a-69ac-4ff1-aefa-64b17329697f-kube-api-access-55hb2\") pod \"nmstate-console-plugin-7fbb5f6569-7glrw\" (UID: \"dca0040a-69ac-4ff1-aefa-64b17329697f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw"
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.031466 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-zj2qc"
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.062239 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-8f8f988d8-bfj5n"]
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.062911 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8f8f988d8-bfj5n"
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.068288 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qqs48"
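Both MountVolume failures in this stretch (tls-key-pair waiting on secret openshift-nmstate-webhook, plugin-serving-cert waiting on secret plugin-serving-cert) are startup races rather than persistent errors: the pods were scheduled before the operator and service-ca had produced the certificate secrets. The kubelet parks each mount behind a "No retries permitted until ..." gate, set to the failure time plus durationBeforeRetry 500ms, and retries on a growing back-off; both mounts succeed on the retry at 11:18:01.41 and 11:18:01.52 below. A sketch of that gating, with the constants (500 ms doubling toward a roughly two-minute cap) stated as assumptions from upstream defaults rather than values read from this cluster:

# Back-off gating in the style of kubelet's nestedpendingoperations:
# each failure pushes the next permitted attempt out by a doubling delay.
from datetime import datetime, timedelta

def next_retry(last_failure: datetime, failures: int,
               base=timedelta(milliseconds=500),
               cap=timedelta(minutes=2)) -> datetime:
    delay = min(base * (2 ** (failures - 1)), cap)
    return last_failure + delay

t0 = datetime.fromisoformat("2025-11-28 11:18:01.007554")
print(next_retry(t0, 1))
# 2025-11-28 11:18:01.507554, within a fraction of a millisecond
# of the logged gate 11:18:01.50753806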
Need to start a new one" pod="openshift-nmstate/nmstate-handler-qqs48" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.078933 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8f8f988d8-bfj5n"] Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.107942 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5fea419c-6fb9-4343-b8eb-90d25acb1041-oauth-serving-cert\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.107984 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5fea419c-6fb9-4343-b8eb-90d25acb1041-console-oauth-config\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.108062 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5fea419c-6fb9-4343-b8eb-90d25acb1041-service-ca\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.108108 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fea419c-6fb9-4343-b8eb-90d25acb1041-trusted-ca-bundle\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.108134 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fea419c-6fb9-4343-b8eb-90d25acb1041-console-serving-cert\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.108221 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tsr2\" (UniqueName: \"kubernetes.io/projected/5fea419c-6fb9-4343-b8eb-90d25acb1041-kube-api-access-6tsr2\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.109204 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5fea419c-6fb9-4343-b8eb-90d25acb1041-console-config\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.161951 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qqs48" event={"ID":"1bec0d53-d3cb-497c-8db1-646796c7194c","Type":"ContainerStarted","Data":"fb57f530c411e07aba62ad0a337930516459d2035a10f393f11746cd83825179"} Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.211187 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5fea419c-6fb9-4343-b8eb-90d25acb1041-console-oauth-config\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.211228 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5fea419c-6fb9-4343-b8eb-90d25acb1041-oauth-serving-cert\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.211312 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5fea419c-6fb9-4343-b8eb-90d25acb1041-service-ca\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.211350 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fea419c-6fb9-4343-b8eb-90d25acb1041-trusted-ca-bundle\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.211398 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fea419c-6fb9-4343-b8eb-90d25acb1041-console-serving-cert\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.211465 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tsr2\" (UniqueName: \"kubernetes.io/projected/5fea419c-6fb9-4343-b8eb-90d25acb1041-kube-api-access-6tsr2\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.211517 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5fea419c-6fb9-4343-b8eb-90d25acb1041-console-config\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.212903 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5fea419c-6fb9-4343-b8eb-90d25acb1041-console-config\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.213782 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5fea419c-6fb9-4343-b8eb-90d25acb1041-service-ca\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.217644 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5fea419c-6fb9-4343-b8eb-90d25acb1041-oauth-serving-cert\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.220452 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fea419c-6fb9-4343-b8eb-90d25acb1041-trusted-ca-bundle\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.222335 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fea419c-6fb9-4343-b8eb-90d25acb1041-console-serving-cert\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.223295 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5fea419c-6fb9-4343-b8eb-90d25acb1041-console-oauth-config\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.236671 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tsr2\" (UniqueName: \"kubernetes.io/projected/5fea419c-6fb9-4343-b8eb-90d25acb1041-kube-api-access-6tsr2\") pod \"console-8f8f988d8-bfj5n\" (UID: \"5fea419c-6fb9-4343-b8eb-90d25acb1041\") " pod="openshift-console/console-8f8f988d8-bfj5n" Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.241710 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-zj2qc"] Nov 28 11:18:01 crc kubenswrapper[4772]: W1128 11:18:01.248110 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee79d6cb_a211_42c9_b669_9e202376834a.slice/crio-c087bccf96d0a37c3d008c1dac1cb598380b1914998474ef86397f7d38a5bd28 WatchSource:0}: Error finding container c087bccf96d0a37c3d008c1dac1cb598380b1914998474ef86397f7d38a5bd28: Status 404 returned error can't find the container with id c087bccf96d0a37c3d008c1dac1cb598380b1914998474ef86397f7d38a5bd28 Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.404799 4772 util.go:30] "No sandbox for pod can be found. 
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.415238 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c53fcd9d-4c6c-4829-8caa-3cddd7c60442-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-h6rxp\" (UID: \"c53fcd9d-4c6c-4829-8caa-3cddd7c60442\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp"
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.418828 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c53fcd9d-4c6c-4829-8caa-3cddd7c60442-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-h6rxp\" (UID: \"c53fcd9d-4c6c-4829-8caa-3cddd7c60442\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp"
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.517085 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dca0040a-69ac-4ff1-aefa-64b17329697f-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-7glrw\" (UID: \"dca0040a-69ac-4ff1-aefa-64b17329697f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw"
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.523234 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dca0040a-69ac-4ff1-aefa-64b17329697f-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-7glrw\" (UID: \"dca0040a-69ac-4ff1-aefa-64b17329697f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw"
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.563615 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8f8f988d8-bfj5n"]
Nov 28 11:18:01 crc kubenswrapper[4772]: W1128 11:18:01.568613 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fea419c_6fb9_4343_b8eb_90d25acb1041.slice/crio-e8763fb46fa952e6f2172de0693f91c16fa3c3a19b822baee6d9f887bcdb9ae4 WatchSource:0}: Error finding container e8763fb46fa952e6f2172de0693f91c16fa3c3a19b822baee6d9f887bcdb9ae4: Status 404 returned error can't find the container with id e8763fb46fa952e6f2172de0693f91c16fa3c3a19b822baee6d9f887bcdb9ae4
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.647672 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp"
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.773629 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw"
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.821178 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp"]
Nov 28 11:18:01 crc kubenswrapper[4772]: I1128 11:18:01.957863 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw"]
Nov 28 11:18:01 crc kubenswrapper[4772]: W1128 11:18:01.964773 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddca0040a_69ac_4ff1_aefa_64b17329697f.slice/crio-35a8bacaa3767797467c63e880248c2ae0e4c4b18a4c91007e9c215ef062f86e WatchSource:0}: Error finding container 35a8bacaa3767797467c63e880248c2ae0e4c4b18a4c91007e9c215ef062f86e: Status 404 returned error can't find the container with id 35a8bacaa3767797467c63e880248c2ae0e4c4b18a4c91007e9c215ef062f86e
Nov 28 11:18:02 crc kubenswrapper[4772]: I1128 11:18:02.173321 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw" event={"ID":"dca0040a-69ac-4ff1-aefa-64b17329697f","Type":"ContainerStarted","Data":"35a8bacaa3767797467c63e880248c2ae0e4c4b18a4c91007e9c215ef062f86e"}
Nov 28 11:18:02 crc kubenswrapper[4772]: I1128 11:18:02.175156 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8f8f988d8-bfj5n" event={"ID":"5fea419c-6fb9-4343-b8eb-90d25acb1041","Type":"ContainerStarted","Data":"636a5fa5d09dcd5f075bc169fabaaf99341d44a57359f00ef12e9be11d5a9177"}
Nov 28 11:18:02 crc kubenswrapper[4772]: I1128 11:18:02.175207 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8f8f988d8-bfj5n" event={"ID":"5fea419c-6fb9-4343-b8eb-90d25acb1041","Type":"ContainerStarted","Data":"e8763fb46fa952e6f2172de0693f91c16fa3c3a19b822baee6d9f887bcdb9ae4"}
Nov 28 11:18:02 crc kubenswrapper[4772]: I1128 11:18:02.176202 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-zj2qc" event={"ID":"ee79d6cb-a211-42c9-b669-9e202376834a","Type":"ContainerStarted","Data":"c087bccf96d0a37c3d008c1dac1cb598380b1914998474ef86397f7d38a5bd28"}
Nov 28 11:18:02 crc kubenswrapper[4772]: I1128 11:18:02.177475 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp" event={"ID":"c53fcd9d-4c6c-4829-8caa-3cddd7c60442","Type":"ContainerStarted","Data":"4392056a05e6a04967c09d5c61207db10d27bd0e94c2d03020d9eae8870ab5da"}
Nov 28 11:18:02 crc kubenswrapper[4772]: I1128 11:18:02.193877 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8f8f988d8-bfj5n" podStartSLOduration=1.193861346 podStartE2EDuration="1.193861346s" podCreationTimestamp="2025-11-28 11:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:18:02.189841535 +0000 UTC m=+680.513084772" watchObservedRunningTime="2025-11-28 11:18:02.193861346 +0000 UTC m=+680.517104573"
Nov 28 11:18:05 crc kubenswrapper[4772]: I1128 11:18:05.195691 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-zj2qc" event={"ID":"ee79d6cb-a211-42c9-b669-9e202376834a","Type":"ContainerStarted","Data":"7771dfb375404dcd9bbe2ea03297252cfef1a5c67da2f316b299851f8a5ef236"}
Nov 28 11:18:05 crc kubenswrapper[4772]: I1128 11:18:05.198173 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp" event={"ID":"c53fcd9d-4c6c-4829-8caa-3cddd7c60442","Type":"ContainerStarted","Data":"9945c131f76039b17605956e793982e5474f738dab4fb9b7bce8ef279db47ec6"}
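The W1128 manager.go:1169 lines above are cAdvisor watch-event warnings: the kubelet sees the new crio-<id> cgroup appear before the runtime has registered the container, so the lookup returns a 404. Here each warning is immediately followed by a normal ContainerStarted PLEG event for the same ID, which suggests a transient race rather than a real failure. When triaging such journals it helps to split the klog header into fields; a small sketch assuming the header layout visible in these lines (not a kubelet API):

```go
// kloghdr.go - split a klog header ("W1128 11:18:01.248110 4772
// manager.go:1169] ...") into severity, timestamp, pid and source
// location, so warnings can be counted separately from info lines.
package main

import (
	"fmt"
	"regexp"
)

var hdr = regexp.MustCompile(`([IWEF])(\d{4} \d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+)\s+([\w.]+:\d+)\]\s?(.*)$`)

func main() {
	line := `W1128 11:18:01.248110 4772 manager.go:1169] Failed to process watch event`
	if m := hdr.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s time=%s pid=%s at=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5])
	}
}
```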
Nov 28 11:18:05 crc kubenswrapper[4772]: I1128 11:18:05.198979 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp"
Nov 28 11:18:05 crc kubenswrapper[4772]: I1128 11:18:05.201560 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qqs48" event={"ID":"1bec0d53-d3cb-497c-8db1-646796c7194c","Type":"ContainerStarted","Data":"1b1aeb66dbd255b47a603df436c27760a2dfbdf6d0075572643c0ca13855d0cc"}
Nov 28 11:18:05 crc kubenswrapper[4772]: I1128 11:18:05.201664 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-qqs48"
Nov 28 11:18:05 crc kubenswrapper[4772]: I1128 11:18:05.214175 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp" podStartSLOduration=2.943049375 podStartE2EDuration="5.214154748s" podCreationTimestamp="2025-11-28 11:18:00 +0000 UTC" firstStartedPulling="2025-11-28 11:18:01.834088849 +0000 UTC m=+680.157332076" lastFinishedPulling="2025-11-28 11:18:04.105194222 +0000 UTC m=+682.428437449" observedRunningTime="2025-11-28 11:18:05.213104071 +0000 UTC m=+683.536347308" watchObservedRunningTime="2025-11-28 11:18:05.214154748 +0000 UTC m=+683.537397995"
Nov 28 11:18:05 crc kubenswrapper[4772]: I1128 11:18:05.229867 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-qqs48" podStartSLOduration=2.244534987 podStartE2EDuration="5.229847691s" podCreationTimestamp="2025-11-28 11:18:00 +0000 UTC" firstStartedPulling="2025-11-28 11:18:01.097514997 +0000 UTC m=+679.420758224" lastFinishedPulling="2025-11-28 11:18:04.082827701 +0000 UTC m=+682.406070928" observedRunningTime="2025-11-28 11:18:05.225780519 +0000 UTC m=+683.549023766" watchObservedRunningTime="2025-11-28 11:18:05.229847691 +0000 UTC m=+683.553090918"
Nov 28 11:18:06 crc kubenswrapper[4772]: I1128 11:18:06.209404 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw" event={"ID":"dca0040a-69ac-4ff1-aefa-64b17329697f","Type":"ContainerStarted","Data":"0bb3c329211db01c107a51057e65bf0bbea45ed18e4e9abf7f1617d7b7cf264d"}
Nov 28 11:18:06 crc kubenswrapper[4772]: I1128 11:18:06.228454 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7glrw" podStartSLOduration=2.536874652 podStartE2EDuration="6.228429017s" podCreationTimestamp="2025-11-28 11:18:00 +0000 UTC" firstStartedPulling="2025-11-28 11:18:01.971495336 +0000 UTC m=+680.294738563" lastFinishedPulling="2025-11-28 11:18:05.663049701 +0000 UTC m=+683.986292928" observedRunningTime="2025-11-28 11:18:06.227308729 +0000 UTC m=+684.550551956" watchObservedRunningTime="2025-11-28 11:18:06.228429017 +0000 UTC m=+684.551672284"
Nov 28 11:18:08 crc kubenswrapper[4772]: I1128 11:18:08.235642 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-zj2qc" event={"ID":"ee79d6cb-a211-42c9-b669-9e202376834a","Type":"ContainerStarted","Data":"00f743833c5a08a75b05c3ecf7eac2ad8dcdf3621284ab98fdca9807f1ece67d"}
Nov 28 11:18:08 crc kubenswrapper[4772]: I1128 11:18:08.263771 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-zj2qc" podStartSLOduration=2.330120554 podStartE2EDuration="8.263748314s" podCreationTimestamp="2025-11-28 11:18:00 +0000 UTC" firstStartedPulling="2025-11-28 11:18:01.250828534 +0000 UTC m=+679.574071771" lastFinishedPulling="2025-11-28 11:18:07.184456314 +0000 UTC m=+685.507699531" observedRunningTime="2025-11-28 11:18:08.261771734 +0000 UTC m=+686.585014961" watchObservedRunningTime="2025-11-28 11:18:08.263748314 +0000 UTC m=+686.586991551"
Nov 28 11:18:11 crc kubenswrapper[4772]: I1128 11:18:11.101829 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-qqs48"
Nov 28 11:18:11 crc kubenswrapper[4772]: I1128 11:18:11.405141 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8f8f988d8-bfj5n"
Nov 28 11:18:11 crc kubenswrapper[4772]: I1128 11:18:11.405200 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8f8f988d8-bfj5n"
Nov 28 11:18:11 crc kubenswrapper[4772]: I1128 11:18:11.409755 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8f8f988d8-bfj5n"
Nov 28 11:18:12 crc kubenswrapper[4772]: I1128 11:18:12.265043 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8f8f988d8-bfj5n"
Nov 28 11:18:12 crc kubenswrapper[4772]: I1128 11:18:12.338459 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xffg6"]
Nov 28 11:18:21 crc kubenswrapper[4772]: I1128 11:18:21.654582 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-h6rxp"
Nov 28 11:18:33 crc kubenswrapper[4772]: I1128 11:18:33.533125 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"]
Nov 28 11:18:33 crc kubenswrapper[4772]: I1128 11:18:33.535151 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"
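The pod_startup_latency_tracker lines carry redundant fields that can be cross-checked: podStartSLOduration is the end-to-end startup time minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check in Go against the nmstate-webhook numbers above; the values are copied verbatim from the log, and the program is plain arithmetic, not kubelet code:

```go
// sloduration.go - verify podStartSLOduration = podStartE2EDuration -
// (lastFinishedPulling - firstStartedPulling) for the
// nmstate-webhook-5f6d4c5ccb-h6rxp line above.
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	e2e := 5214154748 * time.Nanosecond // podStartE2EDuration="5.214154748s"
	firstPull := parse("2025-11-28 11:18:01.834088849 +0000 UTC")
	lastPull := parse("2025-11-28 11:18:04.105194222 +0000 UTC")
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(slo) // 2.943049375s, matching podStartSLOduration
}
```

The console pod above is the degenerate case: no image pull happened (both pull timestamps are the zero value 0001-01-01), so its SLO and E2E durations are identical.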
Nov 28 11:18:33 crc kubenswrapper[4772]: I1128 11:18:33.537915 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 28 11:18:33 crc kubenswrapper[4772]: I1128 11:18:33.546403 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"]
Nov 28 11:18:33 crc kubenswrapper[4772]: I1128 11:18:33.593807 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf6hk\" (UniqueName: \"kubernetes.io/projected/53a7011a-aa17-41ce-9010-9cc9bb873b56-kube-api-access-jf6hk\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd\" (UID: \"53a7011a-aa17-41ce-9010-9cc9bb873b56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"
Nov 28 11:18:33 crc kubenswrapper[4772]: I1128 11:18:33.593893 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53a7011a-aa17-41ce-9010-9cc9bb873b56-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd\" (UID: \"53a7011a-aa17-41ce-9010-9cc9bb873b56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"
Nov 28 11:18:33 crc kubenswrapper[4772]: I1128 11:18:33.593932 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53a7011a-aa17-41ce-9010-9cc9bb873b56-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd\" (UID: \"53a7011a-aa17-41ce-9010-9cc9bb873b56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"
Nov 28 11:18:33 crc kubenswrapper[4772]: I1128 11:18:33.695463 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf6hk\" (UniqueName: \"kubernetes.io/projected/53a7011a-aa17-41ce-9010-9cc9bb873b56-kube-api-access-jf6hk\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd\" (UID: \"53a7011a-aa17-41ce-9010-9cc9bb873b56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"
Nov 28 11:18:33 crc kubenswrapper[4772]: I1128 11:18:33.695876 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53a7011a-aa17-41ce-9010-9cc9bb873b56-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd\" (UID: \"53a7011a-aa17-41ce-9010-9cc9bb873b56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"
Nov 28 11:18:33 crc kubenswrapper[4772]: I1128 11:18:33.695904 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53a7011a-aa17-41ce-9010-9cc9bb873b56-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd\" (UID: \"53a7011a-aa17-41ce-9010-9cc9bb873b56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"
Nov 28 11:18:33 crc kubenswrapper[4772]: I1128 11:18:33.696494 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53a7011a-aa17-41ce-9010-9cc9bb873b56-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd\" (UID: \"53a7011a-aa17-41ce-9010-9cc9bb873b56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"
Nov 28 11:18:33 crc kubenswrapper[4772]: I1128 11:18:33.696498 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53a7011a-aa17-41ce-9010-9cc9bb873b56-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd\" (UID: \"53a7011a-aa17-41ce-9010-9cc9bb873b56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"
Nov 28 11:18:33 crc kubenswrapper[4772]: I1128 11:18:33.714505 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf6hk\" (UniqueName: \"kubernetes.io/projected/53a7011a-aa17-41ce-9010-9cc9bb873b56-kube-api-access-jf6hk\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd\" (UID: \"53a7011a-aa17-41ce-9010-9cc9bb873b56\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"
Nov 28 11:18:33 crc kubenswrapper[4772]: I1128 11:18:33.866071 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"
Nov 28 11:18:34 crc kubenswrapper[4772]: I1128 11:18:34.267976 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"]
Nov 28 11:18:34 crc kubenswrapper[4772]: W1128 11:18:34.273947 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53a7011a_aa17_41ce_9010_9cc9bb873b56.slice/crio-61c180812e67bfbaddcf9555b673795eaf6cd932cae43d212463be8e984554b0 WatchSource:0}: Error finding container 61c180812e67bfbaddcf9555b673795eaf6cd932cae43d212463be8e984554b0: Status 404 returned error can't find the container with id 61c180812e67bfbaddcf9555b673795eaf6cd932cae43d212463be8e984554b0
Nov 28 11:18:34 crc kubenswrapper[4772]: I1128 11:18:34.387495 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd" event={"ID":"53a7011a-aa17-41ce-9010-9cc9bb873b56","Type":"ContainerStarted","Data":"61c180812e67bfbaddcf9555b673795eaf6cd932cae43d212463be8e984554b0"}
Nov 28 11:18:35 crc kubenswrapper[4772]: I1128 11:18:35.397452 4772 generic.go:334] "Generic (PLEG): container finished" podID="53a7011a-aa17-41ce-9010-9cc9bb873b56" containerID="a9215841aa4c817d07769582f797debd927ea70ee4b41e9c3bfd05aa17683cbc" exitCode=0
Nov 28 11:18:35 crc kubenswrapper[4772]: I1128 11:18:35.397505 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd" event={"ID":"53a7011a-aa17-41ce-9010-9cc9bb873b56","Type":"ContainerDied","Data":"a9215841aa4c817d07769582f797debd927ea70ee4b41e9c3bfd05aa17683cbc"}
Nov 28 11:18:37 crc kubenswrapper[4772]: I1128 11:18:37.381491 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-xffg6" podUID="7c2f8de4-e5c0-493a-b16f-b415832ba9bd" containerName="console" containerID="cri-o://6868b2c91a5af5f3b4d99a1a44aa656d584859a2dcc5c8265444ee4bee3510e5" gracePeriod=15
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.416050 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xffg6_7c2f8de4-e5c0-493a-b16f-b415832ba9bd/console/0.log"
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.416599 4772 generic.go:334] "Generic (PLEG): container finished" podID="7c2f8de4-e5c0-493a-b16f-b415832ba9bd" containerID="6868b2c91a5af5f3b4d99a1a44aa656d584859a2dcc5c8265444ee4bee3510e5" exitCode=2
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.416656 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xffg6" event={"ID":"7c2f8de4-e5c0-493a-b16f-b415832ba9bd","Type":"ContainerDied","Data":"6868b2c91a5af5f3b4d99a1a44aa656d584859a2dcc5c8265444ee4bee3510e5"}
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.416701 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xffg6" event={"ID":"7c2f8de4-e5c0-493a-b16f-b415832ba9bd","Type":"ContainerDied","Data":"abda2fc9fc65e4e7ae6b733d7b84229c89711a50c49d2616fc424bd45e149ef0"}
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.416717 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abda2fc9fc65e4e7ae6b733d7b84229c89711a50c49d2616fc424bd45e149ef0"
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.447683 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xffg6_7c2f8de4-e5c0-493a-b16f-b415832ba9bd/console/0.log"
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.447763 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xffg6"
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.561711 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cngz\" (UniqueName: \"kubernetes.io/projected/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-kube-api-access-6cngz\") pod \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") "
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.561789 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-config\") pod \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") "
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.561855 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-trusted-ca-bundle\") pod \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") "
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.561883 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-oauth-serving-cert\") pod \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") "
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.561942 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-oauth-config\") pod \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") "
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.562004 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-serving-cert\") pod \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") "
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.562628 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7c2f8de4-e5c0-493a-b16f-b415832ba9bd" (UID: "7c2f8de4-e5c0-493a-b16f-b415832ba9bd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.562684 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-config" (OuterVolumeSpecName: "console-config") pod "7c2f8de4-e5c0-493a-b16f-b415832ba9bd" (UID: "7c2f8de4-e5c0-493a-b16f-b415832ba9bd"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.562882 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7c2f8de4-e5c0-493a-b16f-b415832ba9bd" (UID: "7c2f8de4-e5c0-493a-b16f-b415832ba9bd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.563375 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-service-ca\") pod \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\" (UID: \"7c2f8de4-e5c0-493a-b16f-b415832ba9bd\") "
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.563798 4772 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-config\") on node \"crc\" DevicePath \"\""
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.563818 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.563830 4772 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.563980 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-service-ca" (OuterVolumeSpecName: "service-ca") pod "7c2f8de4-e5c0-493a-b16f-b415832ba9bd" (UID: "7c2f8de4-e5c0-493a-b16f-b415832ba9bd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.569388 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7c2f8de4-e5c0-493a-b16f-b415832ba9bd" (UID: "7c2f8de4-e5c0-493a-b16f-b415832ba9bd"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.570756 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-kube-api-access-6cngz" (OuterVolumeSpecName: "kube-api-access-6cngz") pod "7c2f8de4-e5c0-493a-b16f-b415832ba9bd" (UID: "7c2f8de4-e5c0-493a-b16f-b415832ba9bd"). InnerVolumeSpecName "kube-api-access-6cngz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.570756 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7c2f8de4-e5c0-493a-b16f-b415832ba9bd" (UID: "7c2f8de4-e5c0-493a-b16f-b415832ba9bd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.664709 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cngz\" (UniqueName: \"kubernetes.io/projected/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-kube-api-access-6cngz\") on node \"crc\" DevicePath \"\""
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.665032 4772 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-oauth-config\") on node \"crc\" DevicePath \"\""
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.665045 4772 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-console-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 11:18:38 crc kubenswrapper[4772]: I1128 11:18:38.665055 4772 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c2f8de4-e5c0-493a-b16f-b415832ba9bd-service-ca\") on node \"crc\" DevicePath \"\""
Nov 28 11:18:39 crc kubenswrapper[4772]: I1128 11:18:39.426637 4772 generic.go:334] "Generic (PLEG): container finished" podID="53a7011a-aa17-41ce-9010-9cc9bb873b56" containerID="439de09d31e7097299102671581def9abe045cc44106d2db4049847e50b9e53f" exitCode=0
Nov 28 11:18:39 crc kubenswrapper[4772]: I1128 11:18:39.426727 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xffg6"
Nov 28 11:18:39 crc kubenswrapper[4772]: I1128 11:18:39.426715 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd" event={"ID":"53a7011a-aa17-41ce-9010-9cc9bb873b56","Type":"ContainerDied","Data":"439de09d31e7097299102671581def9abe045cc44106d2db4049847e50b9e53f"}
Nov 28 11:18:39 crc kubenswrapper[4772]: I1128 11:18:39.471341 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xffg6"]
Nov 28 11:18:39 crc kubenswrapper[4772]: I1128 11:18:39.480326 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-xffg6"]
Nov 28 11:18:40 crc kubenswrapper[4772]: I1128 11:18:40.001922 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c2f8de4-e5c0-493a-b16f-b415832ba9bd" path="/var/lib/kubelet/pods/7c2f8de4-e5c0-493a-b16f-b415832ba9bd/volumes"
Nov 28 11:18:40 crc kubenswrapper[4772]: I1128 11:18:40.434750 4772 generic.go:334] "Generic (PLEG): container finished" podID="53a7011a-aa17-41ce-9010-9cc9bb873b56" containerID="d2d78198eb3893362840eb907d3172144fe339bc57252659fb9be05b8925e193" exitCode=0
Nov 28 11:18:40 crc kubenswrapper[4772]: I1128 11:18:40.434792 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd" event={"ID":"53a7011a-aa17-41ce-9010-9cc9bb873b56","Type":"ContainerDied","Data":"d2d78198eb3893362840eb907d3172144fe339bc57252659fb9be05b8925e193"}
Nov 28 11:18:41 crc kubenswrapper[4772]: I1128 11:18:41.723989 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"
Nov 28 11:18:41 crc kubenswrapper[4772]: I1128 11:18:41.904078 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53a7011a-aa17-41ce-9010-9cc9bb873b56-util\") pod \"53a7011a-aa17-41ce-9010-9cc9bb873b56\" (UID: \"53a7011a-aa17-41ce-9010-9cc9bb873b56\") "
Nov 28 11:18:41 crc kubenswrapper[4772]: I1128 11:18:41.904421 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf6hk\" (UniqueName: \"kubernetes.io/projected/53a7011a-aa17-41ce-9010-9cc9bb873b56-kube-api-access-jf6hk\") pod \"53a7011a-aa17-41ce-9010-9cc9bb873b56\" (UID: \"53a7011a-aa17-41ce-9010-9cc9bb873b56\") "
Nov 28 11:18:41 crc kubenswrapper[4772]: I1128 11:18:41.904516 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53a7011a-aa17-41ce-9010-9cc9bb873b56-bundle\") pod \"53a7011a-aa17-41ce-9010-9cc9bb873b56\" (UID: \"53a7011a-aa17-41ce-9010-9cc9bb873b56\") "
Nov 28 11:18:41 crc kubenswrapper[4772]: I1128 11:18:41.905738 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53a7011a-aa17-41ce-9010-9cc9bb873b56-bundle" (OuterVolumeSpecName: "bundle") pod "53a7011a-aa17-41ce-9010-9cc9bb873b56" (UID: "53a7011a-aa17-41ce-9010-9cc9bb873b56"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:18:41 crc kubenswrapper[4772]: I1128 11:18:41.910670 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53a7011a-aa17-41ce-9010-9cc9bb873b56-kube-api-access-jf6hk" (OuterVolumeSpecName: "kube-api-access-jf6hk") pod "53a7011a-aa17-41ce-9010-9cc9bb873b56" (UID: "53a7011a-aa17-41ce-9010-9cc9bb873b56"). InnerVolumeSpecName "kube-api-access-jf6hk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:18:41 crc kubenswrapper[4772]: I1128 11:18:41.925731 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53a7011a-aa17-41ce-9010-9cc9bb873b56-util" (OuterVolumeSpecName: "util") pod "53a7011a-aa17-41ce-9010-9cc9bb873b56" (UID: "53a7011a-aa17-41ce-9010-9cc9bb873b56"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:18:42 crc kubenswrapper[4772]: I1128 11:18:42.005378 4772 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53a7011a-aa17-41ce-9010-9cc9bb873b56-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 11:18:42 crc kubenswrapper[4772]: I1128 11:18:42.005600 4772 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53a7011a-aa17-41ce-9010-9cc9bb873b56-util\") on node \"crc\" DevicePath \"\""
Nov 28 11:18:42 crc kubenswrapper[4772]: I1128 11:18:42.005660 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf6hk\" (UniqueName: \"kubernetes.io/projected/53a7011a-aa17-41ce-9010-9cc9bb873b56-kube-api-access-jf6hk\") on node \"crc\" DevicePath \"\""
Nov 28 11:18:42 crc kubenswrapper[4772]: I1128 11:18:42.351391 4772 scope.go:117] "RemoveContainer" containerID="6868b2c91a5af5f3b4d99a1a44aa656d584859a2dcc5c8265444ee4bee3510e5"
Nov 28 11:18:42 crc kubenswrapper[4772]: I1128 11:18:42.450156 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd" event={"ID":"53a7011a-aa17-41ce-9010-9cc9bb873b56","Type":"ContainerDied","Data":"61c180812e67bfbaddcf9555b673795eaf6cd932cae43d212463be8e984554b0"}
Nov 28 11:18:42 crc kubenswrapper[4772]: I1128 11:18:42.450209 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61c180812e67bfbaddcf9555b673795eaf6cd932cae43d212463be8e984554b0"
Nov 28 11:18:42 crc kubenswrapper[4772]: I1128 11:18:42.450252 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.709876 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"]
Nov 28 11:18:51 crc kubenswrapper[4772]: E1128 11:18:51.710450 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a7011a-aa17-41ce-9010-9cc9bb873b56" containerName="util"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.710462 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a7011a-aa17-41ce-9010-9cc9bb873b56" containerName="util"
Nov 28 11:18:51 crc kubenswrapper[4772]: E1128 11:18:51.710473 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a7011a-aa17-41ce-9010-9cc9bb873b56" containerName="pull"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.710478 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a7011a-aa17-41ce-9010-9cc9bb873b56" containerName="pull"
Nov 28 11:18:51 crc kubenswrapper[4772]: E1128 11:18:51.710487 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a7011a-aa17-41ce-9010-9cc9bb873b56" containerName="extract"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.710493 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a7011a-aa17-41ce-9010-9cc9bb873b56" containerName="extract"
Nov 28 11:18:51 crc kubenswrapper[4772]: E1128 11:18:51.710506 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c2f8de4-e5c0-493a-b16f-b415832ba9bd" containerName="console"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.710511 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c2f8de4-e5c0-493a-b16f-b415832ba9bd" containerName="console"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.710610 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c2f8de4-e5c0-493a-b16f-b415832ba9bd" containerName="console"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.710621 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a7011a-aa17-41ce-9010-9cc9bb873b56" containerName="extract"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.710977 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"
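The E1128 cpu_manager.go:410 lines fire while the next pod (the metallb operator) is being admitted: the CPU and memory managers sweep per-container state left behind by pods that no longer exist, here the finished bundle-unpack containers (util, pull, extract) and the old console. Despite the E severity these read as routine cleanup, each paired with a "Deleted CPUSet assignment" info line. A toy illustration of such a sweep; this is not kubelet source, and the types and cpuset strings are invented:

```go
// stalestate.go - a toy sweep: assignments keyed by (podUID, container)
// whose pod is no longer live are logged and deleted, mimicking the
// RemoveStaleState lines above.
package main

import "fmt"

type key struct{ podUID, container string }

func removeStaleState(assignments map[key]string, live map[string]bool) {
	for k := range assignments { // deleting during range is safe in Go
		if !live[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"53a7011a-aa17-41ce-9010-9cc9bb873b56", "util"}:    "cpuset 0-1",
		{"53a7011a-aa17-41ce-9010-9cc9bb873b56", "pull"}:    "cpuset 0-1",
		{"53a7011a-aa17-41ce-9010-9cc9bb873b56", "extract"}: "cpuset 0-1",
	}
	removeStaleState(assignments, map[string]bool{}) // no pods live
}
```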
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.737376 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.737390 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-bwklt"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.737489 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.737489 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.738271 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.742758 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b9e32536-fc25-4e7d-8361-41e61fd188f4-apiservice-cert\") pod \"metallb-operator-controller-manager-656496c9ff-4ql2l\" (UID: \"b9e32536-fc25-4e7d-8361-41e61fd188f4\") " pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.742804 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9e32536-fc25-4e7d-8361-41e61fd188f4-webhook-cert\") pod \"metallb-operator-controller-manager-656496c9ff-4ql2l\" (UID: \"b9e32536-fc25-4e7d-8361-41e61fd188f4\") " pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.742849 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcp4s\" (UniqueName: \"kubernetes.io/projected/b9e32536-fc25-4e7d-8361-41e61fd188f4-kube-api-access-lcp4s\") pod \"metallb-operator-controller-manager-656496c9ff-4ql2l\" (UID: \"b9e32536-fc25-4e7d-8361-41e61fd188f4\") " pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.763920 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"]
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.843696 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9e32536-fc25-4e7d-8361-41e61fd188f4-webhook-cert\") pod \"metallb-operator-controller-manager-656496c9ff-4ql2l\" (UID: \"b9e32536-fc25-4e7d-8361-41e61fd188f4\") " pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.843774 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcp4s\" (UniqueName: \"kubernetes.io/projected/b9e32536-fc25-4e7d-8361-41e61fd188f4-kube-api-access-lcp4s\") pod \"metallb-operator-controller-manager-656496c9ff-4ql2l\" (UID: \"b9e32536-fc25-4e7d-8361-41e61fd188f4\") " pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.843834 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b9e32536-fc25-4e7d-8361-41e61fd188f4-apiservice-cert\") pod \"metallb-operator-controller-manager-656496c9ff-4ql2l\" (UID: \"b9e32536-fc25-4e7d-8361-41e61fd188f4\") " pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.853425 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b9e32536-fc25-4e7d-8361-41e61fd188f4-apiservice-cert\") pod \"metallb-operator-controller-manager-656496c9ff-4ql2l\" (UID: \"b9e32536-fc25-4e7d-8361-41e61fd188f4\") " pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.870141 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9e32536-fc25-4e7d-8361-41e61fd188f4-webhook-cert\") pod \"metallb-operator-controller-manager-656496c9ff-4ql2l\" (UID: \"b9e32536-fc25-4e7d-8361-41e61fd188f4\") " pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"
Nov 28 11:18:51 crc kubenswrapper[4772]: I1128 11:18:51.888120 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcp4s\" (UniqueName: \"kubernetes.io/projected/b9e32536-fc25-4e7d-8361-41e61fd188f4-kube-api-access-lcp4s\") pod \"metallb-operator-controller-manager-656496c9ff-4ql2l\" (UID: \"b9e32536-fc25-4e7d-8361-41e61fd188f4\") " pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.025388 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.034380 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"]
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.035063 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.036879 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-84mnn"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.037079 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.037999 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.062845 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"]
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.148149 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c08f594-245d-4b59-9890-c8277ce4229f-apiservice-cert\") pod \"metallb-operator-webhook-server-5b6fc5bcd6-krdrb\" (UID: \"2c08f594-245d-4b59-9890-c8277ce4229f\") " pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.148239 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9b56\" (UniqueName: \"kubernetes.io/projected/2c08f594-245d-4b59-9890-c8277ce4229f-kube-api-access-x9b56\") pod \"metallb-operator-webhook-server-5b6fc5bcd6-krdrb\" (UID: \"2c08f594-245d-4b59-9890-c8277ce4229f\") " pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.148266 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c08f594-245d-4b59-9890-c8277ce4229f-webhook-cert\") pod \"metallb-operator-webhook-server-5b6fc5bcd6-krdrb\" (UID: \"2c08f594-245d-4b59-9890-c8277ce4229f\") " pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.248962 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9b56\" (UniqueName: \"kubernetes.io/projected/2c08f594-245d-4b59-9890-c8277ce4229f-kube-api-access-x9b56\") pod \"metallb-operator-webhook-server-5b6fc5bcd6-krdrb\" (UID: \"2c08f594-245d-4b59-9890-c8277ce4229f\") " pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.249010 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c08f594-245d-4b59-9890-c8277ce4229f-webhook-cert\") pod \"metallb-operator-webhook-server-5b6fc5bcd6-krdrb\" (UID: \"2c08f594-245d-4b59-9890-c8277ce4229f\") " pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.249077 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c08f594-245d-4b59-9890-c8277ce4229f-apiservice-cert\") pod \"metallb-operator-webhook-server-5b6fc5bcd6-krdrb\" (UID: \"2c08f594-245d-4b59-9890-c8277ce4229f\") " pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.253932 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c08f594-245d-4b59-9890-c8277ce4229f-webhook-cert\") pod \"metallb-operator-webhook-server-5b6fc5bcd6-krdrb\" (UID: \"2c08f594-245d-4b59-9890-c8277ce4229f\") " pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.254617 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c08f594-245d-4b59-9890-c8277ce4229f-apiservice-cert\") pod \"metallb-operator-webhook-server-5b6fc5bcd6-krdrb\" (UID: \"2c08f594-245d-4b59-9890-c8277ce4229f\") " pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.267721 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9b56\" (UniqueName: \"kubernetes.io/projected/2c08f594-245d-4b59-9890-c8277ce4229f-kube-api-access-x9b56\") pod \"metallb-operator-webhook-server-5b6fc5bcd6-krdrb\" (UID: \"2c08f594-245d-4b59-9890-c8277ce4229f\") " pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.394719 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.526740 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"]
Nov 28 11:18:52 crc kubenswrapper[4772]: W1128 11:18:52.537747 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9e32536_fc25_4e7d_8361_41e61fd188f4.slice/crio-42995cd419401eedea38a4716745bbf328d0720bf120fbe4c20bcf5b98c366ba WatchSource:0}: Error finding container 42995cd419401eedea38a4716745bbf328d0720bf120fbe4c20bcf5b98c366ba: Status 404 returned error can't find the container with id 42995cd419401eedea38a4716745bbf328d0720bf120fbe4c20bcf5b98c366ba
Nov 28 11:18:52 crc kubenswrapper[4772]: I1128 11:18:52.601870 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"]
Nov 28 11:18:52 crc kubenswrapper[4772]: W1128 11:18:52.609976 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c08f594_245d_4b59_9890_c8277ce4229f.slice/crio-1a00e1b9e626fff392eac0fc686dd1c6582db4ef34f357cb59ce554952617dea WatchSource:0}: Error finding container 1a00e1b9e626fff392eac0fc686dd1c6582db4ef34f357cb59ce554952617dea: Status 404 returned error can't find the container with id 1a00e1b9e626fff392eac0fc686dd1c6582db4ef34f357cb59ce554952617dea
Nov 28 11:18:53 crc kubenswrapper[4772]: I1128 11:18:53.516561 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb" event={"ID":"2c08f594-245d-4b59-9890-c8277ce4229f","Type":"ContainerStarted","Data":"1a00e1b9e626fff392eac0fc686dd1c6582db4ef34f357cb59ce554952617dea"}
Nov 28 11:18:53 crc kubenswrapper[4772]: I1128 11:18:53.517491 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l" event={"ID":"b9e32536-fc25-4e7d-8361-41e61fd188f4","Type":"ContainerStarted","Data":"42995cd419401eedea38a4716745bbf328d0720bf120fbe4c20bcf5b98c366ba"}
Nov 28 11:18:56 crc kubenswrapper[4772]: I1128 11:18:56.535409 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l" event={"ID":"b9e32536-fc25-4e7d-8361-41e61fd188f4","Type":"ContainerStarted","Data":"90de9b0ee8e19f68afa82690f3ad4d8628ce98124ce0b53d7a80b0304cb04ef3"}
Nov 28 11:18:56 crc kubenswrapper[4772]: I1128 11:18:56.535972 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"
Nov 28 11:18:56 crc kubenswrapper[4772]: I1128 11:18:56.559792 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l" podStartSLOduration=2.24070357 podStartE2EDuration="5.559773363s" podCreationTimestamp="2025-11-28 11:18:51 +0000 UTC" firstStartedPulling="2025-11-28 11:18:52.541844136 +0000 UTC m=+730.865087363" lastFinishedPulling="2025-11-28 11:18:55.860913929 +0000 UTC m=+734.184157156" observedRunningTime="2025-11-28 11:18:56.551276333 +0000 UTC m=+734.874519560" watchObservedRunningTime="2025-11-28 11:18:56.559773363 +0000 UTC m=+734.883016590"
Nov 28 11:18:58 crc kubenswrapper[4772]: I1128 11:18:58.548119 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb" event={"ID":"2c08f594-245d-4b59-9890-c8277ce4229f","Type":"ContainerStarted","Data":"d0261d1d68ef08188a098f0c55538f6c04a84cd9e64243050d0759382f447810"}
Nov 28 11:18:58 crc kubenswrapper[4772]: I1128 11:18:58.548425 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"
Nov 28 11:18:58 crc kubenswrapper[4772]: I1128 11:18:58.568988 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb" podStartSLOduration=1.7657055910000001 podStartE2EDuration="6.568965546s" podCreationTimestamp="2025-11-28 11:18:52 +0000 UTC" firstStartedPulling="2025-11-28 11:18:52.613032395 +0000 UTC m=+730.936275622" lastFinishedPulling="2025-11-28 11:18:57.41629236 +0000 UTC m=+735.739535577" observedRunningTime="2025-11-28 11:18:58.564598993 +0000 UTC m=+736.887842250" watchObservedRunningTime="2025-11-28 11:18:58.568965546 +0000 UTC m=+736.892208773"
Nov 28 11:19:08 crc kubenswrapper[4772]: I1128 11:19:08.219759 4772 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 28 11:19:12 crc kubenswrapper[4772]: I1128 11:19:12.398762 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5b6fc5bcd6-krdrb"
Nov 28 11:19:32 crc kubenswrapper[4772]: I1128 11:19:32.028182 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-656496c9ff-4ql2l"
Nov 28 11:19:32 crc kubenswrapper[4772]: I1128 11:19:32.907717 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-c5wb9"]
Nov 28 11:19:32 crc kubenswrapper[4772]: I1128 11:19:32.910832 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-c5wb9"
Nov 28 11:19:32 crc kubenswrapper[4772]: I1128 11:19:32.916962 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr"]
Nov 28 11:19:32 crc kubenswrapper[4772]: I1128 11:19:32.917802 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr"
Nov 28 11:19:32 crc kubenswrapper[4772]: I1128 11:19:32.922614 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Nov 28 11:19:32 crc kubenswrapper[4772]: I1128 11:19:32.922641 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Nov 28 11:19:32 crc kubenswrapper[4772]: I1128 11:19:32.924736 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Nov 28 11:19:32 crc kubenswrapper[4772]: I1128 11:19:32.932806 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-rqzrn"
Nov 28 11:19:32 crc kubenswrapper[4772]: I1128 11:19:32.955195 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr"]
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.012844 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-c9tgf"]
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.013737 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-c9tgf"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.017156 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-d7tl8"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.017627 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.017686 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.020073 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.023648 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-pndk9"]
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.026041 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-pndk9"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.027901 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.043926 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-pndk9"]
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.087259 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-metrics\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.087466 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-metrics-certs\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.087495 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfnbw\" (UniqueName: \"kubernetes.io/projected/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-kube-api-access-vfnbw\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.087510 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-frr-conf\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.087550 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqztc\" (UniqueName: \"kubernetes.io/projected/755f7720-2965-444a-887c-b4ab39b4160f-kube-api-access-fqztc\") pod \"frr-k8s-webhook-server-7fcb986d4-h5pwr\" (UID: \"755f7720-2965-444a-887c-b4ab39b4160f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.087583 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/755f7720-2965-444a-887c-b4ab39b4160f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-h5pwr\" (UID: \"755f7720-2965-444a-887c-b4ab39b4160f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.087600 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-reloader\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.087701 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-frr-sockets\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.087722 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-frr-startup\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188571 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-metrics-certs\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188622 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfnbw\" (UniqueName: \"kubernetes.io/projected/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-kube-api-access-vfnbw\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188650 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-frr-conf\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188677 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2ea9b50-b6aa-4700-b238-a66b47d5d070-cert\") pod \"controller-f8648f98b-pndk9\" (UID: \"e2ea9b50-b6aa-4700-b238-a66b47d5d070\") " pod="metallb-system/controller-f8648f98b-pndk9"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188708 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/175603eb-4244-46dc-98a4-2f8426488c48-metrics-certs\") pod \"speaker-c9tgf\" (UID: \"175603eb-4244-46dc-98a4-2f8426488c48\") " pod="metallb-system/speaker-c9tgf"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188734 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqztc\" (UniqueName: \"kubernetes.io/projected/755f7720-2965-444a-887c-b4ab39b4160f-kube-api-access-fqztc\") pod \"frr-k8s-webhook-server-7fcb986d4-h5pwr\" (UID: \"755f7720-2965-444a-887c-b4ab39b4160f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188757 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6wxl\" (UniqueName: \"kubernetes.io/projected/175603eb-4244-46dc-98a4-2f8426488c48-kube-api-access-b6wxl\") pod \"speaker-c9tgf\" (UID: \"175603eb-4244-46dc-98a4-2f8426488c48\") " pod="metallb-system/speaker-c9tgf"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188784 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2ea9b50-b6aa-4700-b238-a66b47d5d070-metrics-certs\") pod \"controller-f8648f98b-pndk9\" (UID: \"e2ea9b50-b6aa-4700-b238-a66b47d5d070\") " pod="metallb-system/controller-f8648f98b-pndk9"
Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188805 4772 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/755f7720-2965-444a-887c-b4ab39b4160f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-h5pwr\" (UID: \"755f7720-2965-444a-887c-b4ab39b4160f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188824 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/175603eb-4244-46dc-98a4-2f8426488c48-metallb-excludel2\") pod \"speaker-c9tgf\" (UID: \"175603eb-4244-46dc-98a4-2f8426488c48\") " pod="metallb-system/speaker-c9tgf" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188842 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-reloader\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188868 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p477b\" (UniqueName: \"kubernetes.io/projected/e2ea9b50-b6aa-4700-b238-a66b47d5d070-kube-api-access-p477b\") pod \"controller-f8648f98b-pndk9\" (UID: \"e2ea9b50-b6aa-4700-b238-a66b47d5d070\") " pod="metallb-system/controller-f8648f98b-pndk9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188898 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/175603eb-4244-46dc-98a4-2f8426488c48-memberlist\") pod \"speaker-c9tgf\" (UID: \"175603eb-4244-46dc-98a4-2f8426488c48\") " pod="metallb-system/speaker-c9tgf" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188944 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-frr-sockets\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.188983 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-frr-startup\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.189024 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-metrics\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:33 crc kubenswrapper[4772]: E1128 11:19:33.189208 4772 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 28 11:19:33 crc kubenswrapper[4772]: E1128 11:19:33.189262 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/755f7720-2965-444a-887c-b4ab39b4160f-cert podName:755f7720-2965-444a-887c-b4ab39b4160f nodeName:}" failed. No retries permitted until 2025-11-28 11:19:33.689244035 +0000 UTC m=+772.012487262 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/755f7720-2965-444a-887c-b4ab39b4160f-cert") pod "frr-k8s-webhook-server-7fcb986d4-h5pwr" (UID: "755f7720-2965-444a-887c-b4ab39b4160f") : secret "frr-k8s-webhook-server-cert" not found Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.189379 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-frr-conf\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.189351 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-frr-sockets\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.189686 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-reloader\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.189949 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-metrics\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.190259 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-frr-startup\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.200975 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-metrics-certs\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.203718 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfnbw\" (UniqueName: \"kubernetes.io/projected/8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba-kube-api-access-vfnbw\") pod \"frr-k8s-c5wb9\" (UID: \"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba\") " pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.220106 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqztc\" (UniqueName: \"kubernetes.io/projected/755f7720-2965-444a-887c-b4ab39b4160f-kube-api-access-fqztc\") pod \"frr-k8s-webhook-server-7fcb986d4-h5pwr\" (UID: \"755f7720-2965-444a-887c-b4ab39b4160f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.234306 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.290288 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2ea9b50-b6aa-4700-b238-a66b47d5d070-metrics-certs\") pod \"controller-f8648f98b-pndk9\" (UID: \"e2ea9b50-b6aa-4700-b238-a66b47d5d070\") " pod="metallb-system/controller-f8648f98b-pndk9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.290346 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/175603eb-4244-46dc-98a4-2f8426488c48-metallb-excludel2\") pod \"speaker-c9tgf\" (UID: \"175603eb-4244-46dc-98a4-2f8426488c48\") " pod="metallb-system/speaker-c9tgf" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.290401 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p477b\" (UniqueName: \"kubernetes.io/projected/e2ea9b50-b6aa-4700-b238-a66b47d5d070-kube-api-access-p477b\") pod \"controller-f8648f98b-pndk9\" (UID: \"e2ea9b50-b6aa-4700-b238-a66b47d5d070\") " pod="metallb-system/controller-f8648f98b-pndk9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.290440 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/175603eb-4244-46dc-98a4-2f8426488c48-memberlist\") pod \"speaker-c9tgf\" (UID: \"175603eb-4244-46dc-98a4-2f8426488c48\") " pod="metallb-system/speaker-c9tgf" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.290513 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2ea9b50-b6aa-4700-b238-a66b47d5d070-cert\") pod \"controller-f8648f98b-pndk9\" (UID: \"e2ea9b50-b6aa-4700-b238-a66b47d5d070\") " pod="metallb-system/controller-f8648f98b-pndk9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.290534 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/175603eb-4244-46dc-98a4-2f8426488c48-metrics-certs\") pod \"speaker-c9tgf\" (UID: \"175603eb-4244-46dc-98a4-2f8426488c48\") " pod="metallb-system/speaker-c9tgf" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.290556 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6wxl\" (UniqueName: \"kubernetes.io/projected/175603eb-4244-46dc-98a4-2f8426488c48-kube-api-access-b6wxl\") pod \"speaker-c9tgf\" (UID: \"175603eb-4244-46dc-98a4-2f8426488c48\") " pod="metallb-system/speaker-c9tgf" Nov 28 11:19:33 crc kubenswrapper[4772]: E1128 11:19:33.292202 4772 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 11:19:33 crc kubenswrapper[4772]: E1128 11:19:33.292275 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/175603eb-4244-46dc-98a4-2f8426488c48-memberlist podName:175603eb-4244-46dc-98a4-2f8426488c48 nodeName:}" failed. No retries permitted until 2025-11-28 11:19:33.792257279 +0000 UTC m=+772.115500506 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/175603eb-4244-46dc-98a4-2f8426488c48-memberlist") pod "speaker-c9tgf" (UID: "175603eb-4244-46dc-98a4-2f8426488c48") : secret "metallb-memberlist" not found Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.293197 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/175603eb-4244-46dc-98a4-2f8426488c48-metallb-excludel2\") pod \"speaker-c9tgf\" (UID: \"175603eb-4244-46dc-98a4-2f8426488c48\") " pod="metallb-system/speaker-c9tgf" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.305302 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/175603eb-4244-46dc-98a4-2f8426488c48-metrics-certs\") pod \"speaker-c9tgf\" (UID: \"175603eb-4244-46dc-98a4-2f8426488c48\") " pod="metallb-system/speaker-c9tgf" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.305774 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2ea9b50-b6aa-4700-b238-a66b47d5d070-metrics-certs\") pod \"controller-f8648f98b-pndk9\" (UID: \"e2ea9b50-b6aa-4700-b238-a66b47d5d070\") " pod="metallb-system/controller-f8648f98b-pndk9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.314781 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2ea9b50-b6aa-4700-b238-a66b47d5d070-cert\") pod \"controller-f8648f98b-pndk9\" (UID: \"e2ea9b50-b6aa-4700-b238-a66b47d5d070\") " pod="metallb-system/controller-f8648f98b-pndk9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.324055 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6wxl\" (UniqueName: \"kubernetes.io/projected/175603eb-4244-46dc-98a4-2f8426488c48-kube-api-access-b6wxl\") pod \"speaker-c9tgf\" (UID: \"175603eb-4244-46dc-98a4-2f8426488c48\") " pod="metallb-system/speaker-c9tgf" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.324181 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p477b\" (UniqueName: \"kubernetes.io/projected/e2ea9b50-b6aa-4700-b238-a66b47d5d070-kube-api-access-p477b\") pod \"controller-f8648f98b-pndk9\" (UID: \"e2ea9b50-b6aa-4700-b238-a66b47d5d070\") " pod="metallb-system/controller-f8648f98b-pndk9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.341179 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-pndk9" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.694930 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/755f7720-2965-444a-887c-b4ab39b4160f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-h5pwr\" (UID: \"755f7720-2965-444a-887c-b4ab39b4160f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.701186 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/755f7720-2965-444a-887c-b4ab39b4160f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-h5pwr\" (UID: \"755f7720-2965-444a-887c-b4ab39b4160f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr" Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.733692 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-pndk9"] Nov 28 11:19:33 crc kubenswrapper[4772]: W1128 11:19:33.739540 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2ea9b50_b6aa_4700_b238_a66b47d5d070.slice/crio-0db84c5d1249446c6022eb5cd299a8c5108b61628f31f165128d143fd724135e WatchSource:0}: Error finding container 0db84c5d1249446c6022eb5cd299a8c5108b61628f31f165128d143fd724135e: Status 404 returned error can't find the container with id 0db84c5d1249446c6022eb5cd299a8c5108b61628f31f165128d143fd724135e Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.764846 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c5wb9" event={"ID":"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba","Type":"ContainerStarted","Data":"e396dc19a99b869c63ce5c95940d81379843791e69fb113f6eed53c58803a2a3"} Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.766069 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-pndk9" event={"ID":"e2ea9b50-b6aa-4700-b238-a66b47d5d070","Type":"ContainerStarted","Data":"0db84c5d1249446c6022eb5cd299a8c5108b61628f31f165128d143fd724135e"} Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.796883 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/175603eb-4244-46dc-98a4-2f8426488c48-memberlist\") pod \"speaker-c9tgf\" (UID: \"175603eb-4244-46dc-98a4-2f8426488c48\") " pod="metallb-system/speaker-c9tgf" Nov 28 11:19:33 crc kubenswrapper[4772]: E1128 11:19:33.797004 4772 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 11:19:33 crc kubenswrapper[4772]: E1128 11:19:33.797078 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/175603eb-4244-46dc-98a4-2f8426488c48-memberlist podName:175603eb-4244-46dc-98a4-2f8426488c48 nodeName:}" failed. No retries permitted until 2025-11-28 11:19:34.797058985 +0000 UTC m=+773.120302212 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/175603eb-4244-46dc-98a4-2f8426488c48-memberlist") pod "speaker-c9tgf" (UID: "175603eb-4244-46dc-98a4-2f8426488c48") : secret "metallb-memberlist" not found Nov 28 11:19:33 crc kubenswrapper[4772]: I1128 11:19:33.851651 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr" Nov 28 11:19:34 crc kubenswrapper[4772]: I1128 11:19:34.059010 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr"] Nov 28 11:19:34 crc kubenswrapper[4772]: W1128 11:19:34.062092 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod755f7720_2965_444a_887c_b4ab39b4160f.slice/crio-88a3af94e2a1dfd44744beb9c4478f9df19d2cbf36899d830f1605198bac253e WatchSource:0}: Error finding container 88a3af94e2a1dfd44744beb9c4478f9df19d2cbf36899d830f1605198bac253e: Status 404 returned error can't find the container with id 88a3af94e2a1dfd44744beb9c4478f9df19d2cbf36899d830f1605198bac253e Nov 28 11:19:34 crc kubenswrapper[4772]: I1128 11:19:34.772085 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-pndk9" event={"ID":"e2ea9b50-b6aa-4700-b238-a66b47d5d070","Type":"ContainerStarted","Data":"3d9a69fd4a06c7d76a031bf9e6449b3a06164ae143a0a18f3dca8e06dd4a3209"} Nov 28 11:19:34 crc kubenswrapper[4772]: I1128 11:19:34.772135 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-pndk9" event={"ID":"e2ea9b50-b6aa-4700-b238-a66b47d5d070","Type":"ContainerStarted","Data":"7decdf5f66e31a60708b6d7455dde5a8051a2ea96707fa026cd1e47eaeb65f44"} Nov 28 11:19:34 crc kubenswrapper[4772]: I1128 11:19:34.772183 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-pndk9" Nov 28 11:19:34 crc kubenswrapper[4772]: I1128 11:19:34.773737 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr" event={"ID":"755f7720-2965-444a-887c-b4ab39b4160f","Type":"ContainerStarted","Data":"88a3af94e2a1dfd44744beb9c4478f9df19d2cbf36899d830f1605198bac253e"} Nov 28 11:19:34 crc kubenswrapper[4772]: I1128 11:19:34.803329 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-pndk9" podStartSLOduration=1.8033104309999999 podStartE2EDuration="1.803310431s" podCreationTimestamp="2025-11-28 11:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:19:34.797788337 +0000 UTC m=+773.121031584" watchObservedRunningTime="2025-11-28 11:19:34.803310431 +0000 UTC m=+773.126553658" Nov 28 11:19:34 crc kubenswrapper[4772]: I1128 11:19:34.806947 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/175603eb-4244-46dc-98a4-2f8426488c48-memberlist\") pod \"speaker-c9tgf\" (UID: \"175603eb-4244-46dc-98a4-2f8426488c48\") " pod="metallb-system/speaker-c9tgf" Nov 28 11:19:34 crc kubenswrapper[4772]: I1128 11:19:34.815954 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/175603eb-4244-46dc-98a4-2f8426488c48-memberlist\") pod \"speaker-c9tgf\" (UID: \"175603eb-4244-46dc-98a4-2f8426488c48\") " pod="metallb-system/speaker-c9tgf" Nov 28 11:19:34 crc kubenswrapper[4772]: I1128 11:19:34.828821 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-c9tgf" Nov 28 11:19:34 crc kubenswrapper[4772]: W1128 11:19:34.866416 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod175603eb_4244_46dc_98a4_2f8426488c48.slice/crio-3d7e867b9f14a0285ac876ef73278815271128e5fd70fccd3ba65e052af6062a WatchSource:0}: Error finding container 3d7e867b9f14a0285ac876ef73278815271128e5fd70fccd3ba65e052af6062a: Status 404 returned error can't find the container with id 3d7e867b9f14a0285ac876ef73278815271128e5fd70fccd3ba65e052af6062a Nov 28 11:19:35 crc kubenswrapper[4772]: I1128 11:19:35.790083 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-c9tgf" event={"ID":"175603eb-4244-46dc-98a4-2f8426488c48","Type":"ContainerStarted","Data":"53cc4bd69ecc7ec973695a5cae1f23c8d3b8e72eae330d973ca5b0fb8bb24a30"} Nov 28 11:19:35 crc kubenswrapper[4772]: I1128 11:19:35.791338 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-c9tgf" event={"ID":"175603eb-4244-46dc-98a4-2f8426488c48","Type":"ContainerStarted","Data":"c6664b1d50e757cef63c830bbf3ebb34bbbfe46e970b4efb0208043723172931"} Nov 28 11:19:35 crc kubenswrapper[4772]: I1128 11:19:35.791483 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-c9tgf" event={"ID":"175603eb-4244-46dc-98a4-2f8426488c48","Type":"ContainerStarted","Data":"3d7e867b9f14a0285ac876ef73278815271128e5fd70fccd3ba65e052af6062a"} Nov 28 11:19:35 crc kubenswrapper[4772]: I1128 11:19:35.792453 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-c9tgf" Nov 28 11:19:41 crc kubenswrapper[4772]: I1128 11:19:41.835764 4772 generic.go:334] "Generic (PLEG): container finished" podID="8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba" containerID="ccdf257442cd3e1398d2c184f9f76e058a7ef0c0e2eea26e682855b0f39144fe" exitCode=0 Nov 28 11:19:41 crc kubenswrapper[4772]: I1128 11:19:41.836218 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c5wb9" event={"ID":"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba","Type":"ContainerDied","Data":"ccdf257442cd3e1398d2c184f9f76e058a7ef0c0e2eea26e682855b0f39144fe"} Nov 28 11:19:41 crc kubenswrapper[4772]: I1128 11:19:41.838242 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr" event={"ID":"755f7720-2965-444a-887c-b4ab39b4160f","Type":"ContainerStarted","Data":"df96e165aef87976b03c41da0b953c1d844e1f7bf284882c9444ea563833a6a2"} Nov 28 11:19:41 crc kubenswrapper[4772]: I1128 11:19:41.838564 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr" Nov 28 11:19:41 crc kubenswrapper[4772]: I1128 11:19:41.864916 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-c9tgf" podStartSLOduration=9.864900919 podStartE2EDuration="9.864900919s" podCreationTimestamp="2025-11-28 11:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:19:35.820630634 +0000 UTC m=+774.143873861" watchObservedRunningTime="2025-11-28 11:19:41.864900919 +0000 UTC m=+780.188144146" Nov 28 11:19:41 crc kubenswrapper[4772]: I1128 11:19:41.888806 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr" podStartSLOduration=2.630843404 
podStartE2EDuration="9.888781429s" podCreationTimestamp="2025-11-28 11:19:32 +0000 UTC" firstStartedPulling="2025-11-28 11:19:34.06413459 +0000 UTC m=+772.387377817" lastFinishedPulling="2025-11-28 11:19:41.322072615 +0000 UTC m=+779.645315842" observedRunningTime="2025-11-28 11:19:41.881823668 +0000 UTC m=+780.205066905" watchObservedRunningTime="2025-11-28 11:19:41.888781429 +0000 UTC m=+780.212024656" Nov 28 11:19:42 crc kubenswrapper[4772]: I1128 11:19:42.850102 4772 generic.go:334] "Generic (PLEG): container finished" podID="8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba" containerID="7868006ded73e37c8ad7182fc38e45fb55e5d0ef77962fcee0fcb9c1ddabd72a" exitCode=0 Nov 28 11:19:42 crc kubenswrapper[4772]: I1128 11:19:42.850213 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c5wb9" event={"ID":"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba","Type":"ContainerDied","Data":"7868006ded73e37c8ad7182fc38e45fb55e5d0ef77962fcee0fcb9c1ddabd72a"} Nov 28 11:19:43 crc kubenswrapper[4772]: I1128 11:19:43.347188 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-pndk9" Nov 28 11:19:43 crc kubenswrapper[4772]: I1128 11:19:43.863005 4772 generic.go:334] "Generic (PLEG): container finished" podID="8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba" containerID="ff06d29c8b9ba3be98c7a407cd5a4ce3d8575a766aaa69a17604967a35cda042" exitCode=0 Nov 28 11:19:43 crc kubenswrapper[4772]: I1128 11:19:43.863073 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c5wb9" event={"ID":"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba","Type":"ContainerDied","Data":"ff06d29c8b9ba3be98c7a407cd5a4ce3d8575a766aaa69a17604967a35cda042"} Nov 28 11:19:44 crc kubenswrapper[4772]: I1128 11:19:44.872051 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c5wb9" event={"ID":"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba","Type":"ContainerStarted","Data":"bb3f70d6a47b5999a824dac92d7d8b0fc18db389e064eb3c8dfae5ee6b096270"} Nov 28 11:19:44 crc kubenswrapper[4772]: I1128 11:19:44.872292 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c5wb9" event={"ID":"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba","Type":"ContainerStarted","Data":"853d98a4e34451ece1de57d184dffe7d1fcb735f0290898c3121d6826f8beef5"} Nov 28 11:19:44 crc kubenswrapper[4772]: I1128 11:19:44.872303 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c5wb9" event={"ID":"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba","Type":"ContainerStarted","Data":"c0f7cac3e32386f0627fabd9b339644517933549de0e1cec54851b4cc8d731de"} Nov 28 11:19:44 crc kubenswrapper[4772]: I1128 11:19:44.872312 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c5wb9" event={"ID":"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba","Type":"ContainerStarted","Data":"395e1953c468470f239863b27077a027162f2ba160a687e00c09d97ad1e0bb97"} Nov 28 11:19:44 crc kubenswrapper[4772]: I1128 11:19:44.872321 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c5wb9" event={"ID":"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba","Type":"ContainerStarted","Data":"426e74de10125e099a5be39a44a8b23f687633f693a855e0c74c16fccb18ea1a"} Nov 28 11:19:45 crc kubenswrapper[4772]: I1128 11:19:45.884055 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c5wb9" event={"ID":"8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba","Type":"ContainerStarted","Data":"ef2f7cda88b950be4f5f56f637b429767fa3f8a293114cae323fc029ce8a410e"} Nov 28 11:19:45 
crc kubenswrapper[4772]: I1128 11:19:45.884226 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:45 crc kubenswrapper[4772]: I1128 11:19:45.906742 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-c5wb9" podStartSLOduration=5.9962019269999995 podStartE2EDuration="13.906724065s" podCreationTimestamp="2025-11-28 11:19:32 +0000 UTC" firstStartedPulling="2025-11-28 11:19:33.388223311 +0000 UTC m=+771.711466538" lastFinishedPulling="2025-11-28 11:19:41.298745439 +0000 UTC m=+779.621988676" observedRunningTime="2025-11-28 11:19:45.902485205 +0000 UTC m=+784.225728452" watchObservedRunningTime="2025-11-28 11:19:45.906724065 +0000 UTC m=+784.229967292" Nov 28 11:19:48 crc kubenswrapper[4772]: I1128 11:19:48.235581 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:48 crc kubenswrapper[4772]: I1128 11:19:48.306279 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:19:53 crc kubenswrapper[4772]: I1128 11:19:53.860751 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-h5pwr" Nov 28 11:19:53 crc kubenswrapper[4772]: I1128 11:19:53.924344 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:19:53 crc kubenswrapper[4772]: I1128 11:19:53.924415 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:19:54 crc kubenswrapper[4772]: I1128 11:19:54.833172 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-c9tgf" Nov 28 11:19:57 crc kubenswrapper[4772]: I1128 11:19:57.889106 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tw7p5"] Nov 28 11:19:57 crc kubenswrapper[4772]: I1128 11:19:57.890660 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tw7p5" Nov 28 11:19:57 crc kubenswrapper[4772]: I1128 11:19:57.892628 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-fm48n" Nov 28 11:19:57 crc kubenswrapper[4772]: I1128 11:19:57.899138 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 28 11:19:57 crc kubenswrapper[4772]: I1128 11:19:57.899574 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 28 11:19:57 crc kubenswrapper[4772]: I1128 11:19:57.916921 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tw7p5"] Nov 28 11:19:58 crc kubenswrapper[4772]: I1128 11:19:58.023042 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c96st\" (UniqueName: \"kubernetes.io/projected/6e9b0e75-2a83-4b0a-ac2f-722fbda3a307-kube-api-access-c96st\") pod \"openstack-operator-index-tw7p5\" (UID: \"6e9b0e75-2a83-4b0a-ac2f-722fbda3a307\") " pod="openstack-operators/openstack-operator-index-tw7p5" Nov 28 11:19:58 crc kubenswrapper[4772]: I1128 11:19:58.125419 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c96st\" (UniqueName: \"kubernetes.io/projected/6e9b0e75-2a83-4b0a-ac2f-722fbda3a307-kube-api-access-c96st\") pod \"openstack-operator-index-tw7p5\" (UID: \"6e9b0e75-2a83-4b0a-ac2f-722fbda3a307\") " pod="openstack-operators/openstack-operator-index-tw7p5" Nov 28 11:19:58 crc kubenswrapper[4772]: I1128 11:19:58.149407 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c96st\" (UniqueName: \"kubernetes.io/projected/6e9b0e75-2a83-4b0a-ac2f-722fbda3a307-kube-api-access-c96st\") pod \"openstack-operator-index-tw7p5\" (UID: \"6e9b0e75-2a83-4b0a-ac2f-722fbda3a307\") " pod="openstack-operators/openstack-operator-index-tw7p5" Nov 28 11:19:58 crc kubenswrapper[4772]: I1128 11:19:58.220573 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tw7p5" Nov 28 11:19:58 crc kubenswrapper[4772]: I1128 11:19:58.657771 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tw7p5"] Nov 28 11:19:58 crc kubenswrapper[4772]: I1128 11:19:58.966684 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tw7p5" event={"ID":"6e9b0e75-2a83-4b0a-ac2f-722fbda3a307","Type":"ContainerStarted","Data":"8c0751301e34475b5fc3bba9dbb25c3e4a66c6870c54a99bb77a69dec887a339"} Nov 28 11:20:01 crc kubenswrapper[4772]: I1128 11:20:01.984941 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tw7p5" event={"ID":"6e9b0e75-2a83-4b0a-ac2f-722fbda3a307","Type":"ContainerStarted","Data":"15498363ac5aec215494a94f56480d2b540fb36037e13cccae2b40bc67d93cd4"} Nov 28 11:20:01 crc kubenswrapper[4772]: I1128 11:20:01.999560 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tw7p5" podStartSLOduration=2.31424777 podStartE2EDuration="4.999542598s" podCreationTimestamp="2025-11-28 11:19:57 +0000 UTC" firstStartedPulling="2025-11-28 11:19:58.679925242 +0000 UTC m=+797.003168469" lastFinishedPulling="2025-11-28 11:20:01.36522008 +0000 UTC m=+799.688463297" observedRunningTime="2025-11-28 11:20:01.99884983 +0000 UTC m=+800.322093057" watchObservedRunningTime="2025-11-28 11:20:01.999542598 +0000 UTC m=+800.322785825" Nov 28 11:20:02 crc kubenswrapper[4772]: I1128 11:20:02.056954 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-tw7p5"] Nov 28 11:20:02 crc kubenswrapper[4772]: I1128 11:20:02.666522 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-g4wlz"] Nov 28 11:20:02 crc kubenswrapper[4772]: I1128 11:20:02.668763 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-g4wlz" Nov 28 11:20:02 crc kubenswrapper[4772]: I1128 11:20:02.678618 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g4wlz"] Nov 28 11:20:02 crc kubenswrapper[4772]: I1128 11:20:02.792964 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk7st\" (UniqueName: \"kubernetes.io/projected/7cc8caa8-c50f-432d-9d19-8b3f1603d90a-kube-api-access-xk7st\") pod \"openstack-operator-index-g4wlz\" (UID: \"7cc8caa8-c50f-432d-9d19-8b3f1603d90a\") " pod="openstack-operators/openstack-operator-index-g4wlz" Nov 28 11:20:02 crc kubenswrapper[4772]: I1128 11:20:02.894757 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk7st\" (UniqueName: \"kubernetes.io/projected/7cc8caa8-c50f-432d-9d19-8b3f1603d90a-kube-api-access-xk7st\") pod \"openstack-operator-index-g4wlz\" (UID: \"7cc8caa8-c50f-432d-9d19-8b3f1603d90a\") " pod="openstack-operators/openstack-operator-index-g4wlz" Nov 28 11:20:02 crc kubenswrapper[4772]: I1128 11:20:02.913418 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk7st\" (UniqueName: \"kubernetes.io/projected/7cc8caa8-c50f-432d-9d19-8b3f1603d90a-kube-api-access-xk7st\") pod \"openstack-operator-index-g4wlz\" (UID: \"7cc8caa8-c50f-432d-9d19-8b3f1603d90a\") " pod="openstack-operators/openstack-operator-index-g4wlz" Nov 28 11:20:02 crc kubenswrapper[4772]: I1128 11:20:02.999327 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g4wlz" Nov 28 11:20:03 crc kubenswrapper[4772]: I1128 11:20:03.223739 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g4wlz"] Nov 28 11:20:03 crc kubenswrapper[4772]: W1128 11:20:03.226879 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cc8caa8_c50f_432d_9d19_8b3f1603d90a.slice/crio-3d0d2468aa2ef5a95b73f111a12a308e86588c89ff784295c3bd0444616a7b83 WatchSource:0}: Error finding container 3d0d2468aa2ef5a95b73f111a12a308e86588c89ff784295c3bd0444616a7b83: Status 404 returned error can't find the container with id 3d0d2468aa2ef5a95b73f111a12a308e86588c89ff784295c3bd0444616a7b83 Nov 28 11:20:03 crc kubenswrapper[4772]: I1128 11:20:03.240031 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-c5wb9" Nov 28 11:20:04 crc kubenswrapper[4772]: I1128 11:20:04.001011 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-tw7p5" podUID="6e9b0e75-2a83-4b0a-ac2f-722fbda3a307" containerName="registry-server" containerID="cri-o://15498363ac5aec215494a94f56480d2b540fb36037e13cccae2b40bc67d93cd4" gracePeriod=2 Nov 28 11:20:04 crc kubenswrapper[4772]: I1128 11:20:04.011046 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g4wlz" event={"ID":"7cc8caa8-c50f-432d-9d19-8b3f1603d90a","Type":"ContainerStarted","Data":"e5e1bc5fe34a8c2063b2b763f51e76da9c8727eea970eb1df69a4e9b1a716220"} Nov 28 11:20:04 crc kubenswrapper[4772]: I1128 11:20:04.011134 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g4wlz" 
event={"ID":"7cc8caa8-c50f-432d-9d19-8b3f1603d90a","Type":"ContainerStarted","Data":"3d0d2468aa2ef5a95b73f111a12a308e86588c89ff784295c3bd0444616a7b83"} Nov 28 11:20:04 crc kubenswrapper[4772]: I1128 11:20:04.034266 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-g4wlz" podStartSLOduration=1.9637261750000001 podStartE2EDuration="2.034239026s" podCreationTimestamp="2025-11-28 11:20:02 +0000 UTC" firstStartedPulling="2025-11-28 11:20:03.230263792 +0000 UTC m=+801.553507019" lastFinishedPulling="2025-11-28 11:20:03.300776643 +0000 UTC m=+801.624019870" observedRunningTime="2025-11-28 11:20:04.027974393 +0000 UTC m=+802.351217660" watchObservedRunningTime="2025-11-28 11:20:04.034239026 +0000 UTC m=+802.357482263" Nov 28 11:20:04 crc kubenswrapper[4772]: I1128 11:20:04.429326 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tw7p5" Nov 28 11:20:04 crc kubenswrapper[4772]: I1128 11:20:04.618944 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c96st\" (UniqueName: \"kubernetes.io/projected/6e9b0e75-2a83-4b0a-ac2f-722fbda3a307-kube-api-access-c96st\") pod \"6e9b0e75-2a83-4b0a-ac2f-722fbda3a307\" (UID: \"6e9b0e75-2a83-4b0a-ac2f-722fbda3a307\") " Nov 28 11:20:04 crc kubenswrapper[4772]: I1128 11:20:04.625008 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9b0e75-2a83-4b0a-ac2f-722fbda3a307-kube-api-access-c96st" (OuterVolumeSpecName: "kube-api-access-c96st") pod "6e9b0e75-2a83-4b0a-ac2f-722fbda3a307" (UID: "6e9b0e75-2a83-4b0a-ac2f-722fbda3a307"). InnerVolumeSpecName "kube-api-access-c96st". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:20:04 crc kubenswrapper[4772]: I1128 11:20:04.720065 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c96st\" (UniqueName: \"kubernetes.io/projected/6e9b0e75-2a83-4b0a-ac2f-722fbda3a307-kube-api-access-c96st\") on node \"crc\" DevicePath \"\"" Nov 28 11:20:05 crc kubenswrapper[4772]: I1128 11:20:05.007198 4772 generic.go:334] "Generic (PLEG): container finished" podID="6e9b0e75-2a83-4b0a-ac2f-722fbda3a307" containerID="15498363ac5aec215494a94f56480d2b540fb36037e13cccae2b40bc67d93cd4" exitCode=0 Nov 28 11:20:05 crc kubenswrapper[4772]: I1128 11:20:05.007857 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tw7p5" Nov 28 11:20:05 crc kubenswrapper[4772]: I1128 11:20:05.007952 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tw7p5" event={"ID":"6e9b0e75-2a83-4b0a-ac2f-722fbda3a307","Type":"ContainerDied","Data":"15498363ac5aec215494a94f56480d2b540fb36037e13cccae2b40bc67d93cd4"} Nov 28 11:20:05 crc kubenswrapper[4772]: I1128 11:20:05.007994 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tw7p5" event={"ID":"6e9b0e75-2a83-4b0a-ac2f-722fbda3a307","Type":"ContainerDied","Data":"8c0751301e34475b5fc3bba9dbb25c3e4a66c6870c54a99bb77a69dec887a339"} Nov 28 11:20:05 crc kubenswrapper[4772]: I1128 11:20:05.008022 4772 scope.go:117] "RemoveContainer" containerID="15498363ac5aec215494a94f56480d2b540fb36037e13cccae2b40bc67d93cd4" Nov 28 11:20:05 crc kubenswrapper[4772]: I1128 11:20:05.028179 4772 scope.go:117] "RemoveContainer" containerID="15498363ac5aec215494a94f56480d2b540fb36037e13cccae2b40bc67d93cd4" Nov 28 11:20:05 crc kubenswrapper[4772]: E1128 11:20:05.028591 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15498363ac5aec215494a94f56480d2b540fb36037e13cccae2b40bc67d93cd4\": container with ID starting with 15498363ac5aec215494a94f56480d2b540fb36037e13cccae2b40bc67d93cd4 not found: ID does not exist" containerID="15498363ac5aec215494a94f56480d2b540fb36037e13cccae2b40bc67d93cd4" Nov 28 11:20:05 crc kubenswrapper[4772]: I1128 11:20:05.028675 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15498363ac5aec215494a94f56480d2b540fb36037e13cccae2b40bc67d93cd4"} err="failed to get container status \"15498363ac5aec215494a94f56480d2b540fb36037e13cccae2b40bc67d93cd4\": rpc error: code = NotFound desc = could not find container \"15498363ac5aec215494a94f56480d2b540fb36037e13cccae2b40bc67d93cd4\": container with ID starting with 15498363ac5aec215494a94f56480d2b540fb36037e13cccae2b40bc67d93cd4 not found: ID does not exist" Nov 28 11:20:05 crc kubenswrapper[4772]: I1128 11:20:05.041089 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-tw7p5"] Nov 28 11:20:05 crc kubenswrapper[4772]: I1128 11:20:05.044876 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-tw7p5"] Nov 28 11:20:06 crc kubenswrapper[4772]: I1128 11:20:06.002225 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e9b0e75-2a83-4b0a-ac2f-722fbda3a307" path="/var/lib/kubelet/pods/6e9b0e75-2a83-4b0a-ac2f-722fbda3a307/volumes" Nov 28 11:20:13 crc kubenswrapper[4772]: I1128 11:20:12.999997 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-g4wlz" Nov 28 11:20:13 crc kubenswrapper[4772]: I1128 11:20:13.000733 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-g4wlz" Nov 28 11:20:13 crc kubenswrapper[4772]: I1128 11:20:13.052307 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-g4wlz" Nov 28 11:20:13 crc kubenswrapper[4772]: I1128 11:20:13.103529 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-g4wlz" Nov 28 11:20:18 crc 
kubenswrapper[4772]: I1128 11:20:18.114957 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9"] Nov 28 11:20:18 crc kubenswrapper[4772]: E1128 11:20:18.115692 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9b0e75-2a83-4b0a-ac2f-722fbda3a307" containerName="registry-server" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.115712 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9b0e75-2a83-4b0a-ac2f-722fbda3a307" containerName="registry-server" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.115959 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9b0e75-2a83-4b0a-ac2f-722fbda3a307" containerName="registry-server" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.117929 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.121887 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-scfx6" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.142472 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9"] Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.218682 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565eb734-9f39-468d-97ee-7d119b15d945-util\") pod \"407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9\" (UID: \"565eb734-9f39-468d-97ee-7d119b15d945\") " pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.218803 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6bs5\" (UniqueName: \"kubernetes.io/projected/565eb734-9f39-468d-97ee-7d119b15d945-kube-api-access-q6bs5\") pod \"407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9\" (UID: \"565eb734-9f39-468d-97ee-7d119b15d945\") " pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.218927 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/565eb734-9f39-468d-97ee-7d119b15d945-bundle\") pod \"407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9\" (UID: \"565eb734-9f39-468d-97ee-7d119b15d945\") " pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.319994 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6bs5\" (UniqueName: \"kubernetes.io/projected/565eb734-9f39-468d-97ee-7d119b15d945-kube-api-access-q6bs5\") pod \"407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9\" (UID: \"565eb734-9f39-468d-97ee-7d119b15d945\") " pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.320063 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/565eb734-9f39-468d-97ee-7d119b15d945-bundle\") pod \"407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9\" (UID: \"565eb734-9f39-468d-97ee-7d119b15d945\") " pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.320152 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565eb734-9f39-468d-97ee-7d119b15d945-util\") pod \"407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9\" (UID: \"565eb734-9f39-468d-97ee-7d119b15d945\") " pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.320803 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565eb734-9f39-468d-97ee-7d119b15d945-util\") pod \"407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9\" (UID: \"565eb734-9f39-468d-97ee-7d119b15d945\") " pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.321278 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/565eb734-9f39-468d-97ee-7d119b15d945-bundle\") pod \"407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9\" (UID: \"565eb734-9f39-468d-97ee-7d119b15d945\") " pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.340961 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6bs5\" (UniqueName: \"kubernetes.io/projected/565eb734-9f39-468d-97ee-7d119b15d945-kube-api-access-q6bs5\") pod \"407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9\" (UID: \"565eb734-9f39-468d-97ee-7d119b15d945\") " pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.439476 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" Nov 28 11:20:18 crc kubenswrapper[4772]: I1128 11:20:18.891561 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9"] Nov 28 11:20:19 crc kubenswrapper[4772]: I1128 11:20:19.108274 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" event={"ID":"565eb734-9f39-468d-97ee-7d119b15d945","Type":"ContainerStarted","Data":"f4d9cf3bd722bc6c8928d92390ac7f8cd1ad7b3a52d6ad8dbf2cb8798131f89e"} Nov 28 11:20:19 crc kubenswrapper[4772]: I1128 11:20:19.108865 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" event={"ID":"565eb734-9f39-468d-97ee-7d119b15d945","Type":"ContainerStarted","Data":"e04b63bdc85349a90e3d56f3c17a85ea3687b7ead5d72fdedfd372360432c77c"} Nov 28 11:20:20 crc kubenswrapper[4772]: I1128 11:20:20.121793 4772 generic.go:334] "Generic (PLEG): container finished" podID="565eb734-9f39-468d-97ee-7d119b15d945" containerID="f4d9cf3bd722bc6c8928d92390ac7f8cd1ad7b3a52d6ad8dbf2cb8798131f89e" exitCode=0 Nov 28 11:20:20 crc kubenswrapper[4772]: I1128 11:20:20.121847 4772 generic.go:334] "Generic (PLEG): container finished" podID="565eb734-9f39-468d-97ee-7d119b15d945" containerID="9fb4e06aa4229448fe20d6ff187cc1bb27a01e71df8d80131619429693733259" exitCode=0 Nov 28 11:20:20 crc kubenswrapper[4772]: I1128 11:20:20.121888 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" event={"ID":"565eb734-9f39-468d-97ee-7d119b15d945","Type":"ContainerDied","Data":"f4d9cf3bd722bc6c8928d92390ac7f8cd1ad7b3a52d6ad8dbf2cb8798131f89e"} Nov 28 11:20:20 crc kubenswrapper[4772]: I1128 11:20:20.121937 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" event={"ID":"565eb734-9f39-468d-97ee-7d119b15d945","Type":"ContainerDied","Data":"9fb4e06aa4229448fe20d6ff187cc1bb27a01e71df8d80131619429693733259"} Nov 28 11:20:21 crc kubenswrapper[4772]: I1128 11:20:21.137272 4772 generic.go:334] "Generic (PLEG): container finished" podID="565eb734-9f39-468d-97ee-7d119b15d945" containerID="ef9fb6ad900a53134798ed6cf18c8d6d8760365987b1a30de372d778a54d5c2c" exitCode=0 Nov 28 11:20:21 crc kubenswrapper[4772]: I1128 11:20:21.137427 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" event={"ID":"565eb734-9f39-468d-97ee-7d119b15d945","Type":"ContainerDied","Data":"ef9fb6ad900a53134798ed6cf18c8d6d8760365987b1a30de372d778a54d5c2c"} Nov 28 11:20:22 crc kubenswrapper[4772]: I1128 11:20:22.420813 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" Nov 28 11:20:22 crc kubenswrapper[4772]: I1128 11:20:22.583756 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6bs5\" (UniqueName: \"kubernetes.io/projected/565eb734-9f39-468d-97ee-7d119b15d945-kube-api-access-q6bs5\") pod \"565eb734-9f39-468d-97ee-7d119b15d945\" (UID: \"565eb734-9f39-468d-97ee-7d119b15d945\") " Nov 28 11:20:22 crc kubenswrapper[4772]: I1128 11:20:22.583838 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/565eb734-9f39-468d-97ee-7d119b15d945-bundle\") pod \"565eb734-9f39-468d-97ee-7d119b15d945\" (UID: \"565eb734-9f39-468d-97ee-7d119b15d945\") " Nov 28 11:20:22 crc kubenswrapper[4772]: I1128 11:20:22.583902 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565eb734-9f39-468d-97ee-7d119b15d945-util\") pod \"565eb734-9f39-468d-97ee-7d119b15d945\" (UID: \"565eb734-9f39-468d-97ee-7d119b15d945\") " Nov 28 11:20:22 crc kubenswrapper[4772]: I1128 11:20:22.584827 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565eb734-9f39-468d-97ee-7d119b15d945-bundle" (OuterVolumeSpecName: "bundle") pod "565eb734-9f39-468d-97ee-7d119b15d945" (UID: "565eb734-9f39-468d-97ee-7d119b15d945"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:20:22 crc kubenswrapper[4772]: I1128 11:20:22.592469 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565eb734-9f39-468d-97ee-7d119b15d945-kube-api-access-q6bs5" (OuterVolumeSpecName: "kube-api-access-q6bs5") pod "565eb734-9f39-468d-97ee-7d119b15d945" (UID: "565eb734-9f39-468d-97ee-7d119b15d945"). InnerVolumeSpecName "kube-api-access-q6bs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:20:22 crc kubenswrapper[4772]: I1128 11:20:22.597095 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565eb734-9f39-468d-97ee-7d119b15d945-util" (OuterVolumeSpecName: "util") pod "565eb734-9f39-468d-97ee-7d119b15d945" (UID: "565eb734-9f39-468d-97ee-7d119b15d945"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:20:22 crc kubenswrapper[4772]: I1128 11:20:22.685108 4772 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565eb734-9f39-468d-97ee-7d119b15d945-util\") on node \"crc\" DevicePath \"\"" Nov 28 11:20:22 crc kubenswrapper[4772]: I1128 11:20:22.685399 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6bs5\" (UniqueName: \"kubernetes.io/projected/565eb734-9f39-468d-97ee-7d119b15d945-kube-api-access-q6bs5\") on node \"crc\" DevicePath \"\"" Nov 28 11:20:22 crc kubenswrapper[4772]: I1128 11:20:22.685463 4772 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/565eb734-9f39-468d-97ee-7d119b15d945-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:20:23 crc kubenswrapper[4772]: I1128 11:20:23.155460 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" event={"ID":"565eb734-9f39-468d-97ee-7d119b15d945","Type":"ContainerDied","Data":"e04b63bdc85349a90e3d56f3c17a85ea3687b7ead5d72fdedfd372360432c77c"} Nov 28 11:20:23 crc kubenswrapper[4772]: I1128 11:20:23.155737 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e04b63bdc85349a90e3d56f3c17a85ea3687b7ead5d72fdedfd372360432c77c" Nov 28 11:20:23 crc kubenswrapper[4772]: I1128 11:20:23.155577 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9" Nov 28 11:20:23 crc kubenswrapper[4772]: I1128 11:20:23.896876 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:20:23 crc kubenswrapper[4772]: I1128 11:20:23.896954 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:20:30 crc kubenswrapper[4772]: I1128 11:20:30.679265 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7f586794b9-rndfk"] Nov 28 11:20:30 crc kubenswrapper[4772]: E1128 11:20:30.679819 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565eb734-9f39-468d-97ee-7d119b15d945" containerName="util" Nov 28 11:20:30 crc kubenswrapper[4772]: I1128 11:20:30.679834 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="565eb734-9f39-468d-97ee-7d119b15d945" containerName="util" Nov 28 11:20:30 crc kubenswrapper[4772]: E1128 11:20:30.679859 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565eb734-9f39-468d-97ee-7d119b15d945" containerName="extract" Nov 28 11:20:30 crc kubenswrapper[4772]: I1128 11:20:30.679865 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="565eb734-9f39-468d-97ee-7d119b15d945" containerName="extract" Nov 28 11:20:30 crc kubenswrapper[4772]: E1128 11:20:30.679876 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565eb734-9f39-468d-97ee-7d119b15d945" containerName="pull" Nov 28 
11:20:30 crc kubenswrapper[4772]: I1128 11:20:30.679884 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="565eb734-9f39-468d-97ee-7d119b15d945" containerName="pull" Nov 28 11:20:30 crc kubenswrapper[4772]: I1128 11:20:30.680003 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="565eb734-9f39-468d-97ee-7d119b15d945" containerName="extract" Nov 28 11:20:30 crc kubenswrapper[4772]: I1128 11:20:30.680526 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7f586794b9-rndfk" Nov 28 11:20:30 crc kubenswrapper[4772]: I1128 11:20:30.682372 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-lncmj" Nov 28 11:20:30 crc kubenswrapper[4772]: I1128 11:20:30.691931 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6plxt\" (UniqueName: \"kubernetes.io/projected/b4035afc-5f40-4643-8f3c-39a68fe3efa6-kube-api-access-6plxt\") pod \"openstack-operator-controller-operator-7f586794b9-rndfk\" (UID: \"b4035afc-5f40-4643-8f3c-39a68fe3efa6\") " pod="openstack-operators/openstack-operator-controller-operator-7f586794b9-rndfk" Nov 28 11:20:30 crc kubenswrapper[4772]: I1128 11:20:30.713312 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7f586794b9-rndfk"] Nov 28 11:20:30 crc kubenswrapper[4772]: I1128 11:20:30.792875 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6plxt\" (UniqueName: \"kubernetes.io/projected/b4035afc-5f40-4643-8f3c-39a68fe3efa6-kube-api-access-6plxt\") pod \"openstack-operator-controller-operator-7f586794b9-rndfk\" (UID: \"b4035afc-5f40-4643-8f3c-39a68fe3efa6\") " pod="openstack-operators/openstack-operator-controller-operator-7f586794b9-rndfk" Nov 28 11:20:30 crc kubenswrapper[4772]: I1128 11:20:30.812465 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6plxt\" (UniqueName: \"kubernetes.io/projected/b4035afc-5f40-4643-8f3c-39a68fe3efa6-kube-api-access-6plxt\") pod \"openstack-operator-controller-operator-7f586794b9-rndfk\" (UID: \"b4035afc-5f40-4643-8f3c-39a68fe3efa6\") " pod="openstack-operators/openstack-operator-controller-operator-7f586794b9-rndfk" Nov 28 11:20:30 crc kubenswrapper[4772]: I1128 11:20:30.997225 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7f586794b9-rndfk" Nov 28 11:20:31 crc kubenswrapper[4772]: I1128 11:20:31.204705 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7f586794b9-rndfk"] Nov 28 11:20:32 crc kubenswrapper[4772]: I1128 11:20:32.217708 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7f586794b9-rndfk" event={"ID":"b4035afc-5f40-4643-8f3c-39a68fe3efa6","Type":"ContainerStarted","Data":"1620444b696f149ee8efe16d14e4a0b8cc78a73b38d4320c7a9158061693b103"} Nov 28 11:20:36 crc kubenswrapper[4772]: I1128 11:20:36.241819 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7f586794b9-rndfk" event={"ID":"b4035afc-5f40-4643-8f3c-39a68fe3efa6","Type":"ContainerStarted","Data":"c96f4ff3bbb7e2ccf25566254a38d271bc68fda78cc923e5841a63db931eafad"} Nov 28 11:20:36 crc kubenswrapper[4772]: I1128 11:20:36.242533 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7f586794b9-rndfk" Nov 28 11:20:36 crc kubenswrapper[4772]: I1128 11:20:36.271868 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7f586794b9-rndfk" podStartSLOduration=1.620032191 podStartE2EDuration="6.271843105s" podCreationTimestamp="2025-11-28 11:20:30 +0000 UTC" firstStartedPulling="2025-11-28 11:20:31.232572086 +0000 UTC m=+829.555815313" lastFinishedPulling="2025-11-28 11:20:35.88438301 +0000 UTC m=+834.207626227" observedRunningTime="2025-11-28 11:20:36.265609442 +0000 UTC m=+834.588852679" watchObservedRunningTime="2025-11-28 11:20:36.271843105 +0000 UTC m=+834.595086352" Nov 28 11:20:51 crc kubenswrapper[4772]: I1128 11:20:51.000788 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7f586794b9-rndfk" Nov 28 11:20:53 crc kubenswrapper[4772]: I1128 11:20:53.896928 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:20:53 crc kubenswrapper[4772]: I1128 11:20:53.897649 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:20:53 crc kubenswrapper[4772]: I1128 11:20:53.897712 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:20:53 crc kubenswrapper[4772]: I1128 11:20:53.898556 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"113f0827970f37075ba4d848729cf75c46d547c1a460d92b4daa91c0fd781747"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 11:20:53 crc kubenswrapper[4772]: I1128 11:20:53.898649 
4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://113f0827970f37075ba4d848729cf75c46d547c1a460d92b4daa91c0fd781747" gracePeriod=600 Nov 28 11:20:54 crc kubenswrapper[4772]: I1128 11:20:54.382132 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="113f0827970f37075ba4d848729cf75c46d547c1a460d92b4daa91c0fd781747" exitCode=0 Nov 28 11:20:54 crc kubenswrapper[4772]: I1128 11:20:54.382173 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"113f0827970f37075ba4d848729cf75c46d547c1a460d92b4daa91c0fd781747"} Nov 28 11:20:54 crc kubenswrapper[4772]: I1128 11:20:54.382550 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"146105a3e2de3e49e98dafee8802eaebe7226a811726066f96e02933b7de92a2"} Nov 28 11:20:54 crc kubenswrapper[4772]: I1128 11:20:54.382584 4772 scope.go:117] "RemoveContainer" containerID="719ebb3dbeb04504957f753c3982248f2e3853f40081e16785cf8530808c4dd9" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.565045 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.566483 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.568794 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-c6d2z" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.584005 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.589797 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.590991 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.595190 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-kwb9r"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.596484 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-kwb9r" Nov 28 11:21:09 crc kubenswrapper[4772]: W1128 11:21:09.597328 4772 reflector.go:561] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-lx4bk": failed to list *v1.Secret: secrets "cinder-operator-controller-manager-dockercfg-lx4bk" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Nov 28 11:21:09 crc kubenswrapper[4772]: E1128 11:21:09.597403 4772 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"cinder-operator-controller-manager-dockercfg-lx4bk\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cinder-operator-controller-manager-dockercfg-lx4bk\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.601026 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.606152 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-st7dp" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.615599 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.616722 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.621800 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-l77jw" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.639429 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.653662 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-kwb9r"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.676397 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-6r566"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.677336 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-6r566" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.680954 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-z7lsk" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.683346 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.684561 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.685948 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-hfk92" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.689505 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.690569 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.698826 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-6tjz9" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.709343 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.710333 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.712404 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.712612 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5x656" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.714391 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.721442 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-6r566"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.729005 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.734063 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vvfm\" (UniqueName: \"kubernetes.io/projected/e98df0ac-d8d5-49fd-a331-509b0736bbb1-kube-api-access-9vvfm\") pod \"cinder-operator-controller-manager-6b7f75547b-zkr58\" (UID: \"e98df0ac-d8d5-49fd-a331-509b0736bbb1\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.734133 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7fnk\" (UniqueName: \"kubernetes.io/projected/5278900b-7407-46c5-b420-c5569e508132-kube-api-access-n7fnk\") pod \"designate-operator-controller-manager-955677c94-kwb9r\" (UID: \"5278900b-7407-46c5-b420-c5569e508132\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-kwb9r" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.734160 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2z9c\" (UniqueName: 
\"kubernetes.io/projected/7657168c-6a48-435a-92c3-b93970b60d07-kube-api-access-t2z9c\") pod \"glance-operator-controller-manager-589cbd6b5b-7spr8\" (UID: \"7657168c-6a48-435a-92c3-b93970b60d07\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.734207 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pksds\" (UniqueName: \"kubernetes.io/projected/c063126d-a9d6-4a2c-96b4-0b0a42a94fff-kube-api-access-pksds\") pod \"barbican-operator-controller-manager-7b64f4fb85-8n59v\" (UID: \"c063126d-a9d6-4a2c-96b4-0b0a42a94fff\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.759262 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.776748 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ftntr"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.777764 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ftntr" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.782658 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-7wsjt" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.790102 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.791034 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.795089 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-m6lzt" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.802270 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-nfb47"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.803217 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-nfb47" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.806261 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rqh8v" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.833922 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ftntr"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.835534 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pksds\" (UniqueName: \"kubernetes.io/projected/c063126d-a9d6-4a2c-96b4-0b0a42a94fff-kube-api-access-pksds\") pod \"barbican-operator-controller-manager-7b64f4fb85-8n59v\" (UID: \"c063126d-a9d6-4a2c-96b4-0b0a42a94fff\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.835576 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs9ww\" (UniqueName: \"kubernetes.io/projected/7f96e59b-e8a5-471a-8e43-4ae8edfbc7bb-kube-api-access-qs9ww\") pod \"heat-operator-controller-manager-5b77f656f-6r566\" (UID: \"7f96e59b-e8a5-471a-8e43-4ae8edfbc7bb\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-6r566" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.835704 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6ncn\" (UniqueName: \"kubernetes.io/projected/b0c3e372-422f-46e4-94e3-51ed4b3c0fd0-kube-api-access-b6ncn\") pod \"horizon-operator-controller-manager-5d494799bf-qksc2\" (UID: \"b0c3e372-422f-46e4-94e3-51ed4b3c0fd0\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.835730 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert\") pod \"infra-operator-controller-manager-57548d458d-c9nnk\" (UID: \"90486ac7-ac7e-418a-9f2a-5bf934e996ca\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.835751 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vvfm\" (UniqueName: \"kubernetes.io/projected/e98df0ac-d8d5-49fd-a331-509b0736bbb1-kube-api-access-9vvfm\") pod \"cinder-operator-controller-manager-6b7f75547b-zkr58\" (UID: \"e98df0ac-d8d5-49fd-a331-509b0736bbb1\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.836291 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsrbw\" (UniqueName: \"kubernetes.io/projected/c29a1c46-5112-4d85-8f8f-b494575bd428-kube-api-access-nsrbw\") pod \"ironic-operator-controller-manager-67cb4dc6d4-tj9m8\" (UID: \"c29a1c46-5112-4d85-8f8f-b494575bd428\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.836779 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7fnk\" (UniqueName: 
\"kubernetes.io/projected/5278900b-7407-46c5-b420-c5569e508132-kube-api-access-n7fnk\") pod \"designate-operator-controller-manager-955677c94-kwb9r\" (UID: \"5278900b-7407-46c5-b420-c5569e508132\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-kwb9r" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.836829 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2z9c\" (UniqueName: \"kubernetes.io/projected/7657168c-6a48-435a-92c3-b93970b60d07-kube-api-access-t2z9c\") pod \"glance-operator-controller-manager-589cbd6b5b-7spr8\" (UID: \"7657168c-6a48-435a-92c3-b93970b60d07\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.836953 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfkrs\" (UniqueName: \"kubernetes.io/projected/90486ac7-ac7e-418a-9f2a-5bf934e996ca-kube-api-access-nfkrs\") pod \"infra-operator-controller-manager-57548d458d-c9nnk\" (UID: \"90486ac7-ac7e-418a-9f2a-5bf934e996ca\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.845138 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.875415 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-nfb47"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.881724 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vvfm\" (UniqueName: \"kubernetes.io/projected/e98df0ac-d8d5-49fd-a331-509b0736bbb1-kube-api-access-9vvfm\") pod \"cinder-operator-controller-manager-6b7f75547b-zkr58\" (UID: \"e98df0ac-d8d5-49fd-a331-509b0736bbb1\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.897593 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.915486 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pksds\" (UniqueName: \"kubernetes.io/projected/c063126d-a9d6-4a2c-96b4-0b0a42a94fff-kube-api-access-pksds\") pod \"barbican-operator-controller-manager-7b64f4fb85-8n59v\" (UID: \"c063126d-a9d6-4a2c-96b4-0b0a42a94fff\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.917572 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7fnk\" (UniqueName: \"kubernetes.io/projected/5278900b-7407-46c5-b420-c5569e508132-kube-api-access-n7fnk\") pod \"designate-operator-controller-manager-955677c94-kwb9r\" (UID: \"5278900b-7407-46c5-b420-c5569e508132\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-kwb9r" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.917585 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2z9c\" (UniqueName: \"kubernetes.io/projected/7657168c-6a48-435a-92c3-b93970b60d07-kube-api-access-t2z9c\") pod \"glance-operator-controller-manager-589cbd6b5b-7spr8\" (UID: \"7657168c-6a48-435a-92c3-b93970b60d07\") " 
pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.920748 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.926614 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.927677 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.929167 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-r9lgh" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.929169 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-fgmbk" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.934135 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.937156 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-kwb9r" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.941987 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vktq4\" (UniqueName: \"kubernetes.io/projected/f569792f-b95e-4f7a-b58e-22bd27c56dfd-kube-api-access-vktq4\") pod \"keystone-operator-controller-manager-7b4567c7cf-ftntr\" (UID: \"f569792f-b95e-4f7a-b58e-22bd27c56dfd\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ftntr" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.942021 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd72r\" (UniqueName: \"kubernetes.io/projected/d117b1a7-48be-4cc5-928f-b22d31a16b7f-kube-api-access-bd72r\") pod \"manila-operator-controller-manager-5d499bf58b-2w6xw\" (UID: \"d117b1a7-48be-4cc5-928f-b22d31a16b7f\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.942059 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfkrs\" (UniqueName: \"kubernetes.io/projected/90486ac7-ac7e-418a-9f2a-5bf934e996ca-kube-api-access-nfkrs\") pod \"infra-operator-controller-manager-57548d458d-c9nnk\" (UID: \"90486ac7-ac7e-418a-9f2a-5bf934e996ca\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.942105 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bz9k\" (UniqueName: \"kubernetes.io/projected/2dc36b3d-99ac-4a89-bdc3-309a12cc887e-kube-api-access-8bz9k\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-nfb47\" (UID: \"2dc36b3d-99ac-4a89-bdc3-309a12cc887e\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-nfb47" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.942143 4772 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qs9ww\" (UniqueName: \"kubernetes.io/projected/7f96e59b-e8a5-471a-8e43-4ae8edfbc7bb-kube-api-access-qs9ww\") pod \"heat-operator-controller-manager-5b77f656f-6r566\" (UID: \"7f96e59b-e8a5-471a-8e43-4ae8edfbc7bb\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-6r566" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.942166 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6ncn\" (UniqueName: \"kubernetes.io/projected/b0c3e372-422f-46e4-94e3-51ed4b3c0fd0-kube-api-access-b6ncn\") pod \"horizon-operator-controller-manager-5d494799bf-qksc2\" (UID: \"b0c3e372-422f-46e4-94e3-51ed4b3c0fd0\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.942196 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert\") pod \"infra-operator-controller-manager-57548d458d-c9nnk\" (UID: \"90486ac7-ac7e-418a-9f2a-5bf934e996ca\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.942216 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsrbw\" (UniqueName: \"kubernetes.io/projected/c29a1c46-5112-4d85-8f8f-b494575bd428-kube-api-access-nsrbw\") pod \"ironic-operator-controller-manager-67cb4dc6d4-tj9m8\" (UID: \"c29a1c46-5112-4d85-8f8f-b494575bd428\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8" Nov 28 11:21:09 crc kubenswrapper[4772]: E1128 11:21:09.942929 4772 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 11:21:09 crc kubenswrapper[4772]: E1128 11:21:09.942973 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert podName:90486ac7-ac7e-418a-9f2a-5bf934e996ca nodeName:}" failed. No retries permitted until 2025-11-28 11:21:10.442958125 +0000 UTC m=+868.766201352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert") pod "infra-operator-controller-manager-57548d458d-c9nnk" (UID: "90486ac7-ac7e-418a-9f2a-5bf934e996ca") : secret "infra-operator-webhook-server-cert" not found Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.962712 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.964061 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.964207 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.972249 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-htbp6" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.975050 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsrbw\" (UniqueName: \"kubernetes.io/projected/c29a1c46-5112-4d85-8f8f-b494575bd428-kube-api-access-nsrbw\") pod \"ironic-operator-controller-manager-67cb4dc6d4-tj9m8\" (UID: \"c29a1c46-5112-4d85-8f8f-b494575bd428\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.973649 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.985007 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfkrs\" (UniqueName: \"kubernetes.io/projected/90486ac7-ac7e-418a-9f2a-5bf934e996ca-kube-api-access-nfkrs\") pod \"infra-operator-controller-manager-57548d458d-c9nnk\" (UID: \"90486ac7-ac7e-418a-9f2a-5bf934e996ca\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.987150 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs9ww\" (UniqueName: \"kubernetes.io/projected/7f96e59b-e8a5-471a-8e43-4ae8edfbc7bb-kube-api-access-qs9ww\") pod \"heat-operator-controller-manager-5b77f656f-6r566\" (UID: \"7f96e59b-e8a5-471a-8e43-4ae8edfbc7bb\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-6r566" Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.991950 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf"] Nov 28 11:21:09 crc kubenswrapper[4772]: I1128 11:21:09.993037 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6ncn\" (UniqueName: \"kubernetes.io/projected/b0c3e372-422f-46e4-94e3-51ed4b3c0fd0-kube-api-access-b6ncn\") pod \"horizon-operator-controller-manager-5d494799bf-qksc2\" (UID: \"b0c3e372-422f-46e4-94e3-51ed4b3c0fd0\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.001567 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-6r566" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.006620 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.016026 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.017064 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.017719 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.019512 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.020792 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.025682 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.025938 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-gkvbf" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.031164 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.031269 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-cg9g2" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.032172 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.036002 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-h2bqg" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.039528 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.045072 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4zpw\" (UniqueName: \"kubernetes.io/projected/f67d3c6d-0b62-4162-bfeb-24da441f5edc-kube-api-access-w4zpw\") pod \"neutron-operator-controller-manager-6fdcddb789-l8rvz\" (UID: \"f67d3c6d-0b62-4162-bfeb-24da441f5edc\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.045118 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t22wb\" (UniqueName: \"kubernetes.io/projected/a7f0f276-5402-4e33-bd63-f6df7819f966-kube-api-access-t22wb\") pod \"nova-operator-controller-manager-79556f57fc-sc4xf\" (UID: \"a7f0f276-5402-4e33-bd63-f6df7819f966\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.045162 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vktq4\" (UniqueName: \"kubernetes.io/projected/f569792f-b95e-4f7a-b58e-22bd27c56dfd-kube-api-access-vktq4\") pod \"keystone-operator-controller-manager-7b4567c7cf-ftntr\" (UID: \"f569792f-b95e-4f7a-b58e-22bd27c56dfd\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ftntr" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.045183 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd72r\" (UniqueName: 
\"kubernetes.io/projected/d117b1a7-48be-4cc5-928f-b22d31a16b7f-kube-api-access-bd72r\") pod \"manila-operator-controller-manager-5d499bf58b-2w6xw\" (UID: \"d117b1a7-48be-4cc5-928f-b22d31a16b7f\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.045224 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bz9k\" (UniqueName: \"kubernetes.io/projected/2dc36b3d-99ac-4a89-bdc3-309a12cc887e-kube-api-access-8bz9k\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-nfb47\" (UID: \"2dc36b3d-99ac-4a89-bdc3-309a12cc887e\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-nfb47" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.054082 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.063586 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd72r\" (UniqueName: \"kubernetes.io/projected/d117b1a7-48be-4cc5-928f-b22d31a16b7f-kube-api-access-bd72r\") pod \"manila-operator-controller-manager-5d499bf58b-2w6xw\" (UID: \"d117b1a7-48be-4cc5-928f-b22d31a16b7f\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.065244 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.066622 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.068253 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-25qqg" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.072397 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vktq4\" (UniqueName: \"kubernetes.io/projected/f569792f-b95e-4f7a-b58e-22bd27c56dfd-kube-api-access-vktq4\") pod \"keystone-operator-controller-manager-7b4567c7cf-ftntr\" (UID: \"f569792f-b95e-4f7a-b58e-22bd27c56dfd\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ftntr" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.078401 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bz9k\" (UniqueName: \"kubernetes.io/projected/2dc36b3d-99ac-4a89-bdc3-309a12cc887e-kube-api-access-8bz9k\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-nfb47\" (UID: \"2dc36b3d-99ac-4a89-bdc3-309a12cc887e\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-nfb47" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.080165 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.088141 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.109089 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ftntr" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.114422 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.116105 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.117679 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.118477 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-nbwdx" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.118737 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.137812 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-nfb47" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.139575 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.144375 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.145985 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgmnv\" (UniqueName: \"kubernetes.io/projected/7f63617e-c125-40e3-a273-4180f7d8d45c-kube-api-access-bgmnv\") pod \"octavia-operator-controller-manager-64cdc6ff96-k7z7p\" (UID: \"7f63617e-c125-40e3-a273-4180f7d8d45c\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.146022 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs\" (UID: \"47a38974-64f9-46ba-b4cf-f61c0d3a485e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.146047 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbrf\" (UniqueName: \"kubernetes.io/projected/47a38974-64f9-46ba-b4cf-f61c0d3a485e-kube-api-access-sbbrf\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs\" (UID: \"47a38974-64f9-46ba-b4cf-f61c0d3a485e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.146155 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4zpw\" (UniqueName: \"kubernetes.io/projected/f67d3c6d-0b62-4162-bfeb-24da441f5edc-kube-api-access-w4zpw\") pod \"neutron-operator-controller-manager-6fdcddb789-l8rvz\" 
(UID: \"f67d3c6d-0b62-4162-bfeb-24da441f5edc\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.146186 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t22wb\" (UniqueName: \"kubernetes.io/projected/a7f0f276-5402-4e33-bd63-f6df7819f966-kube-api-access-t22wb\") pod \"nova-operator-controller-manager-79556f57fc-sc4xf\" (UID: \"a7f0f276-5402-4e33-bd63-f6df7819f966\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.146284 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6cz2\" (UniqueName: \"kubernetes.io/projected/2ce66d6e-19b8-41e7-890d-f17f4be5a920-kube-api-access-d6cz2\") pod \"placement-operator-controller-manager-57988cc5b5-2mc7h\" (UID: \"2ce66d6e-19b8-41e7-890d-f17f4be5a920\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.146317 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f4vm\" (UniqueName: \"kubernetes.io/projected/e955b059-294d-40ab-b4af-6bbf7c5bb2e6-kube-api-access-4f4vm\") pod \"ovn-operator-controller-manager-56897c768d-jpf2b\" (UID: \"e955b059-294d-40ab-b4af-6bbf7c5bb2e6\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.163835 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-s6whk" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.173737 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.180457 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t22wb\" (UniqueName: \"kubernetes.io/projected/a7f0f276-5402-4e33-bd63-f6df7819f966-kube-api-access-t22wb\") pod \"nova-operator-controller-manager-79556f57fc-sc4xf\" (UID: \"a7f0f276-5402-4e33-bd63-f6df7819f966\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.180202 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4zpw\" (UniqueName: \"kubernetes.io/projected/f67d3c6d-0b62-4162-bfeb-24da441f5edc-kube-api-access-w4zpw\") pod \"neutron-operator-controller-manager-6fdcddb789-l8rvz\" (UID: \"f67d3c6d-0b62-4162-bfeb-24da441f5edc\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.187149 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.232049 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.233059 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.236349 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9q5r8" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.250980 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgmnv\" (UniqueName: \"kubernetes.io/projected/7f63617e-c125-40e3-a273-4180f7d8d45c-kube-api-access-bgmnv\") pod \"octavia-operator-controller-manager-64cdc6ff96-k7z7p\" (UID: \"7f63617e-c125-40e3-a273-4180f7d8d45c\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.251024 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs\" (UID: \"47a38974-64f9-46ba-b4cf-f61c0d3a485e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.251059 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbbrf\" (UniqueName: \"kubernetes.io/projected/47a38974-64f9-46ba-b4cf-f61c0d3a485e-kube-api-access-sbbrf\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs\" (UID: \"47a38974-64f9-46ba-b4cf-f61c0d3a485e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.251194 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp477\" (UniqueName: \"kubernetes.io/projected/9632aabc-46f3-44f3-b6ff-01923cddd5fa-kube-api-access-dp477\") pod \"telemetry-operator-controller-manager-76cc84c6bb-cxg56\" (UID: \"9632aabc-46f3-44f3-b6ff-01923cddd5fa\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.251238 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jmgs\" (UniqueName: \"kubernetes.io/projected/7b6bce9b-9e9a-414a-aad7-5a8667c9557d-kube-api-access-2jmgs\") pod \"test-operator-controller-manager-5cd6c7f4c8-956cf\" (UID: \"7b6bce9b-9e9a-414a-aad7-5a8667c9557d\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.251277 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rp5r\" (UniqueName: \"kubernetes.io/projected/79791884-38fa-4d4e-ace2-cd02b0df26ab-kube-api-access-5rp5r\") pod \"swift-operator-controller-manager-d77b94747-pqx6v\" (UID: \"79791884-38fa-4d4e-ace2-cd02b0df26ab\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.251349 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6cz2\" (UniqueName: \"kubernetes.io/projected/2ce66d6e-19b8-41e7-890d-f17f4be5a920-kube-api-access-d6cz2\") pod \"placement-operator-controller-manager-57988cc5b5-2mc7h\" (UID: \"2ce66d6e-19b8-41e7-890d-f17f4be5a920\") " 
pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.271757 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r"] Nov 28 11:21:10 crc kubenswrapper[4772]: E1128 11:21:10.272430 4772 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 11:21:10 crc kubenswrapper[4772]: E1128 11:21:10.272486 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert podName:47a38974-64f9-46ba-b4cf-f61c0d3a485e nodeName:}" failed. No retries permitted until 2025-11-28 11:21:10.7724692 +0000 UTC m=+869.095712427 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" (UID: "47a38974-64f9-46ba-b4cf-f61c0d3a485e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.271736 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f4vm\" (UniqueName: \"kubernetes.io/projected/e955b059-294d-40ab-b4af-6bbf7c5bb2e6-kube-api-access-4f4vm\") pod \"ovn-operator-controller-manager-56897c768d-jpf2b\" (UID: \"e955b059-294d-40ab-b4af-6bbf7c5bb2e6\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.308765 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.309630 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.311242 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.311600 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f4vm\" (UniqueName: \"kubernetes.io/projected/e955b059-294d-40ab-b4af-6bbf7c5bb2e6-kube-api-access-4f4vm\") pod \"ovn-operator-controller-manager-56897c768d-jpf2b\" (UID: \"e955b059-294d-40ab-b4af-6bbf7c5bb2e6\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.312936 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbrf\" (UniqueName: \"kubernetes.io/projected/47a38974-64f9-46ba-b4cf-f61c0d3a485e-kube-api-access-sbbrf\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs\" (UID: \"47a38974-64f9-46ba-b4cf-f61c0d3a485e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.313065 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.313303 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2gb5l" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.316823 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6cz2\" (UniqueName: \"kubernetes.io/projected/2ce66d6e-19b8-41e7-890d-f17f4be5a920-kube-api-access-d6cz2\") pod \"placement-operator-controller-manager-57988cc5b5-2mc7h\" (UID: \"2ce66d6e-19b8-41e7-890d-f17f4be5a920\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.322764 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.326443 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgmnv\" (UniqueName: \"kubernetes.io/projected/7f63617e-c125-40e3-a273-4180f7d8d45c-kube-api-access-bgmnv\") pod \"octavia-operator-controller-manager-64cdc6ff96-k7z7p\" (UID: \"7f63617e-c125-40e3-a273-4180f7d8d45c\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.330116 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.344339 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.348727 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.378886 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp477\" (UniqueName: \"kubernetes.io/projected/9632aabc-46f3-44f3-b6ff-01923cddd5fa-kube-api-access-dp477\") pod \"telemetry-operator-controller-manager-76cc84c6bb-cxg56\" (UID: \"9632aabc-46f3-44f3-b6ff-01923cddd5fa\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.378936 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jmgs\" (UniqueName: \"kubernetes.io/projected/7b6bce9b-9e9a-414a-aad7-5a8667c9557d-kube-api-access-2jmgs\") pod \"test-operator-controller-manager-5cd6c7f4c8-956cf\" (UID: \"7b6bce9b-9e9a-414a-aad7-5a8667c9557d\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.378959 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k776h\" (UniqueName: \"kubernetes.io/projected/9d009ed5-21d1-4f1c-b1ec-bef39cf8a265-kube-api-access-k776h\") pod \"watcher-operator-controller-manager-656dcb59d4-cfl4r\" (UID: \"9d009ed5-21d1-4f1c-b1ec-bef39cf8a265\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.378988 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rp5r\" (UniqueName: \"kubernetes.io/projected/79791884-38fa-4d4e-ace2-cd02b0df26ab-kube-api-access-5rp5r\") pod \"swift-operator-controller-manager-d77b94747-pqx6v\" (UID: \"79791884-38fa-4d4e-ace2-cd02b0df26ab\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.404436 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.405286 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.408250 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp477\" (UniqueName: \"kubernetes.io/projected/9632aabc-46f3-44f3-b6ff-01923cddd5fa-kube-api-access-dp477\") pod \"telemetry-operator-controller-manager-76cc84c6bb-cxg56\" (UID: \"9632aabc-46f3-44f3-b6ff-01923cddd5fa\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.410082 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rp5r\" (UniqueName: \"kubernetes.io/projected/79791884-38fa-4d4e-ace2-cd02b0df26ab-kube-api-access-5rp5r\") pod \"swift-operator-controller-manager-d77b94747-pqx6v\" (UID: \"79791884-38fa-4d4e-ace2-cd02b0df26ab\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.410473 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.410853 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-pxwcb" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.411482 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.418274 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jmgs\" (UniqueName: \"kubernetes.io/projected/7b6bce9b-9e9a-414a-aad7-5a8667c9557d-kube-api-access-2jmgs\") pod \"test-operator-controller-manager-5cd6c7f4c8-956cf\" (UID: \"7b6bce9b-9e9a-414a-aad7-5a8667c9557d\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.426054 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.440903 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.448314 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.480786 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k776h\" (UniqueName: \"kubernetes.io/projected/9d009ed5-21d1-4f1c-b1ec-bef39cf8a265-kube-api-access-k776h\") pod \"watcher-operator-controller-manager-656dcb59d4-cfl4r\" (UID: \"9d009ed5-21d1-4f1c-b1ec-bef39cf8a265\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.480949 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.481182 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkmpp\" (UniqueName: \"kubernetes.io/projected/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-kube-api-access-wkmpp\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.481323 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.481485 
4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert\") pod \"infra-operator-controller-manager-57548d458d-c9nnk\" (UID: \"90486ac7-ac7e-418a-9f2a-5bf934e996ca\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" Nov 28 11:21:10 crc kubenswrapper[4772]: E1128 11:21:10.481964 4772 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 11:21:10 crc kubenswrapper[4772]: E1128 11:21:10.482033 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert podName:90486ac7-ac7e-418a-9f2a-5bf934e996ca nodeName:}" failed. No retries permitted until 2025-11-28 11:21:11.482015805 +0000 UTC m=+869.805259032 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert") pod "infra-operator-controller-manager-57548d458d-c9nnk" (UID: "90486ac7-ac7e-418a-9f2a-5bf934e996ca") : secret "infra-operator-webhook-server-cert" not found Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.485672 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.552991 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k776h\" (UniqueName: \"kubernetes.io/projected/9d009ed5-21d1-4f1c-b1ec-bef39cf8a265-kube-api-access-k776h\") pod \"watcher-operator-controller-manager-656dcb59d4-cfl4r\" (UID: \"9d009ed5-21d1-4f1c-b1ec-bef39cf8a265\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.585806 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.586893 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m5sm\" (UniqueName: \"kubernetes.io/projected/6e95de97-8ad3-493a-a98f-5541e23ca701-kube-api-access-8m5sm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ldrjz\" (UID: \"6e95de97-8ad3-493a-a98f-5541e23ca701\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.586990 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkmpp\" (UniqueName: \"kubernetes.io/projected/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-kube-api-access-wkmpp\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.587075 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs\") pod 
\"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:10 crc kubenswrapper[4772]: E1128 11:21:10.587300 4772 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 11:21:10 crc kubenswrapper[4772]: E1128 11:21:10.587438 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs podName:ed05bf4d-d7d9-40eb-965a-5c866fc76b3c nodeName:}" failed. No retries permitted until 2025-11-28 11:21:11.087422798 +0000 UTC m=+869.410666025 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs") pod "openstack-operator-controller-manager-6fbf799579-db6rg" (UID: "ed05bf4d-d7d9-40eb-965a-5c866fc76b3c") : secret "metrics-server-cert" not found Nov 28 11:21:10 crc kubenswrapper[4772]: E1128 11:21:10.587727 4772 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 11:21:10 crc kubenswrapper[4772]: E1128 11:21:10.587812 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs podName:ed05bf4d-d7d9-40eb-965a-5c866fc76b3c nodeName:}" failed. No retries permitted until 2025-11-28 11:21:11.087804248 +0000 UTC m=+869.411047475 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs") pod "openstack-operator-controller-manager-6fbf799579-db6rg" (UID: "ed05bf4d-d7d9-40eb-965a-5c866fc76b3c") : secret "webhook-server-cert" not found Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.639899 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkmpp\" (UniqueName: \"kubernetes.io/projected/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-kube-api-access-wkmpp\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.671737 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.689037 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m5sm\" (UniqueName: \"kubernetes.io/projected/6e95de97-8ad3-493a-a98f-5541e23ca701-kube-api-access-8m5sm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ldrjz\" (UID: \"6e95de97-8ad3-493a-a98f-5541e23ca701\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.709111 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m5sm\" (UniqueName: \"kubernetes.io/projected/6e95de97-8ad3-493a-a98f-5541e23ca701-kube-api-access-8m5sm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ldrjz\" (UID: \"6e95de97-8ad3-493a-a98f-5541e23ca701\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.796900 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs\" (UID: \"47a38974-64f9-46ba-b4cf-f61c0d3a485e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" Nov 28 11:21:10 crc kubenswrapper[4772]: E1128 11:21:10.797070 4772 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 11:21:10 crc kubenswrapper[4772]: E1128 11:21:10.797126 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert podName:47a38974-64f9-46ba-b4cf-f61c0d3a485e nodeName:}" failed. No retries permitted until 2025-11-28 11:21:11.797111497 +0000 UTC m=+870.120354714 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" (UID: "47a38974-64f9-46ba-b4cf-f61c0d3a485e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.921242 4772 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58" secret="" err="failed to sync secret cache: timed out waiting for the condition" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.921316 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58" Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.964303 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-kwb9r"] Nov 28 11:21:10 crc kubenswrapper[4772]: I1128 11:21:10.972431 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz" Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.102860 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.103215 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.103329 4772 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.103429 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs podName:ed05bf4d-d7d9-40eb-965a-5c866fc76b3c nodeName:}" failed. No retries permitted until 2025-11-28 11:21:12.103415292 +0000 UTC m=+870.426658519 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs") pod "openstack-operator-controller-manager-6fbf799579-db6rg" (UID: "ed05bf4d-d7d9-40eb-965a-5c866fc76b3c") : secret "webhook-server-cert" not found Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.103470 4772 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.103517 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs podName:ed05bf4d-d7d9-40eb-965a-5c866fc76b3c nodeName:}" failed. No retries permitted until 2025-11-28 11:21:12.103503524 +0000 UTC m=+870.426746751 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs") pod "openstack-operator-controller-manager-6fbf799579-db6rg" (UID: "ed05bf4d-d7d9-40eb-965a-5c866fc76b3c") : secret "metrics-server-cert" not found Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.177144 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-lx4bk" Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.201894 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2"] Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.226323 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-6r566"] Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.243313 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8"] Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.342272 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8"] Nov 28 11:21:11 crc kubenswrapper[4772]: W1128 11:21:11.352619 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc29a1c46_5112_4d85_8f8f_b494575bd428.slice/crio-e84c36e5b5d9217ec9f840a28a21ba7d60724a44022777b3dcc921b616e1b823 WatchSource:0}: Error finding container e84c36e5b5d9217ec9f840a28a21ba7d60724a44022777b3dcc921b616e1b823: Status 404 returned error can't find the container with id e84c36e5b5d9217ec9f840a28a21ba7d60724a44022777b3dcc921b616e1b823 Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.363219 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ftntr"] Nov 28 11:21:11 crc kubenswrapper[4772]: W1128 11:21:11.367078 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf569792f_b95e_4f7a_b58e_22bd27c56dfd.slice/crio-4fe8aafad30d075e7f52cb8e96dec01fc0260c308eba8220e79a5914b5be869a WatchSource:0}: Error finding container 4fe8aafad30d075e7f52cb8e96dec01fc0260c308eba8220e79a5914b5be869a: Status 404 returned error can't find the container with id 4fe8aafad30d075e7f52cb8e96dec01fc0260c308eba8220e79a5914b5be869a Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.368062 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v"] Nov 28 11:21:11 crc kubenswrapper[4772]: W1128 11:21:11.369062 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc063126d_a9d6_4a2c_96b4_0b0a42a94fff.slice/crio-5b346267b1241373742e1c68cb627d914cbc279a45f108474900b670b25cba7d WatchSource:0}: Error finding container 5b346267b1241373742e1c68cb627d914cbc279a45f108474900b670b25cba7d: Status 404 returned error can't find the container with id 5b346267b1241373742e1c68cb627d914cbc279a45f108474900b670b25cba7d Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.438555 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf"] Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 
11:21:11.440214 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz"] Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.450396 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-nfb47"] Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.456253 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw"] Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.461798 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b"] Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.494637 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ftntr" event={"ID":"f569792f-b95e-4f7a-b58e-22bd27c56dfd","Type":"ContainerStarted","Data":"4fe8aafad30d075e7f52cb8e96dec01fc0260c308eba8220e79a5914b5be869a"} Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.496574 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-kwb9r" event={"ID":"5278900b-7407-46c5-b420-c5569e508132","Type":"ContainerStarted","Data":"0d1dab03c4612e8f2e7e31c68e84383d4f3d9cba48d211d9c17564fb381f6a15"} Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.497921 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-6r566" event={"ID":"7f96e59b-e8a5-471a-8e43-4ae8edfbc7bb","Type":"ContainerStarted","Data":"069f6c3a98a5b8356ebd44e4135ba1392a2c4657b76ad76f449ebfb1edac1267"} Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.499149 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw" event={"ID":"d117b1a7-48be-4cc5-928f-b22d31a16b7f","Type":"ContainerStarted","Data":"5dfb24c2b75979c04b4775e993bb157e0aaac889a27eaeddc35bf14b1c50a619"} Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.501795 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2" event={"ID":"b0c3e372-422f-46e4-94e3-51ed4b3c0fd0","Type":"ContainerStarted","Data":"2b436ea375a5aff98d789375cd79d4cb38e44afa9aca1534c023fc710e19be68"} Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.503164 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz" event={"ID":"f67d3c6d-0b62-4162-bfeb-24da441f5edc","Type":"ContainerStarted","Data":"9c3b57be31a39dc4a0446e8b0afc2f4b655b070ff9d225447be7fc7d087336c5"} Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.506230 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v" event={"ID":"c063126d-a9d6-4a2c-96b4-0b0a42a94fff","Type":"ContainerStarted","Data":"5b346267b1241373742e1c68cb627d914cbc279a45f108474900b670b25cba7d"} Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.507219 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8" event={"ID":"c29a1c46-5112-4d85-8f8f-b494575bd428","Type":"ContainerStarted","Data":"e84c36e5b5d9217ec9f840a28a21ba7d60724a44022777b3dcc921b616e1b823"} Nov 28 11:21:11 crc 
kubenswrapper[4772]: I1128 11:21:11.511525 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert\") pod \"infra-operator-controller-manager-57548d458d-c9nnk\" (UID: \"90486ac7-ac7e-418a-9f2a-5bf934e996ca\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.511711 4772 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.511768 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert podName:90486ac7-ac7e-418a-9f2a-5bf934e996ca nodeName:}" failed. No retries permitted until 2025-11-28 11:21:13.511751344 +0000 UTC m=+871.834994571 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert") pod "infra-operator-controller-manager-57548d458d-c9nnk" (UID: "90486ac7-ac7e-418a-9f2a-5bf934e996ca") : secret "infra-operator-webhook-server-cert" not found Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.512861 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-nfb47" event={"ID":"2dc36b3d-99ac-4a89-bdc3-309a12cc887e","Type":"ContainerStarted","Data":"d00effd6a26f26dd419ef89110d1385184be802dd61e8bcde81754fecaa209e3"} Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.514332 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b" event={"ID":"e955b059-294d-40ab-b4af-6bbf7c5bb2e6","Type":"ContainerStarted","Data":"73c60f02ddf11809a82ad4cdc52cc72e0ed340811728574716b83696345e2bcf"} Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.515906 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf" event={"ID":"a7f0f276-5402-4e33-bd63-f6df7819f966","Type":"ContainerStarted","Data":"c611b48d83600c93ed86bb855890e17e90fe11c242c64496977de71e955de277"} Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.517227 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8" event={"ID":"7657168c-6a48-435a-92c3-b93970b60d07","Type":"ContainerStarted","Data":"165f3137280d1dbb387e3898057534207dd89c78215931fcf85d8b13c0ecf0b4"} Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.575286 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r"] Nov 28 11:21:11 crc kubenswrapper[4772]: W1128 11:21:11.582797 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d009ed5_21d1_4f1c_b1ec_bef39cf8a265.slice/crio-6defdb235b5c4af847166701280e181e0d36ca11e3cabc19b94d3b4e8df0e47b WatchSource:0}: Error finding container 6defdb235b5c4af847166701280e181e0d36ca11e3cabc19b94d3b4e8df0e47b: Status 404 returned error can't find the container with id 6defdb235b5c4af847166701280e181e0d36ca11e3cabc19b94d3b4e8df0e47b Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.585614 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k776h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-cfl4r_openstack-operators(9d009ed5-21d1-4f1c-b1ec-bef39cf8a265): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.588457 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k776h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-cfl4r_openstack-operators(9d009ed5-21d1-4f1c-b1ec-bef39cf8a265): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.589524 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r" podUID="9d009ed5-21d1-4f1c-b1ec-bef39cf8a265" Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.593722 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf"] Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.596107 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v"] Nov 28 11:21:11 crc kubenswrapper[4772]: W1128 11:21:11.596286 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b6bce9b_9e9a_414a_aad7_5a8667c9557d.slice/crio-f8df7ade9ab7424f3eef8943bcdb4b1c57d255f9434d3cb94cad5d2348cf58eb WatchSource:0}: Error finding container f8df7ade9ab7424f3eef8943bcdb4b1c57d255f9434d3cb94cad5d2348cf58eb: Status 404 returned error can't find the container with id f8df7ade9ab7424f3eef8943bcdb4b1c57d255f9434d3cb94cad5d2348cf58eb Nov 28 11:21:11 crc kubenswrapper[4772]: W1128 11:21:11.603083 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79791884_38fa_4d4e_ace2_cd02b0df26ab.slice/crio-b9cdea80272150313cd4d63d4c706994724e906cf7c8768da68e3c69dee03ea4 WatchSource:0}: Error finding container b9cdea80272150313cd4d63d4c706994724e906cf7c8768da68e3c69dee03ea4: Status 404 returned error can't find the container with id b9cdea80272150313cd4d63d4c706994724e906cf7c8768da68e3c69dee03ea4 Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.603774 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h"] Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.607048 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rp5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d77b94747-pqx6v_openstack-operators(79791884-38fa-4d4e-ace2-cd02b0df26ab): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.608734 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56"] Nov 28 11:21:11 crc kubenswrapper[4772]: W1128 11:21:11.610071 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9632aabc_46f3_44f3_b6ff_01923cddd5fa.slice/crio-f10cc9eb460a14e5addcae14d17748b7cb5c449e3c9c8153eb918108fcef42c7 WatchSource:0}: Error finding container f10cc9eb460a14e5addcae14d17748b7cb5c449e3c9c8153eb918108fcef42c7: Status 404 returned error can't find the container with id f10cc9eb460a14e5addcae14d17748b7cb5c449e3c9c8153eb918108fcef42c7 Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.611823 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: 
{{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rp5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d77b94747-pqx6v_openstack-operators(79791884-38fa-4d4e-ace2-cd02b0df26ab): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.612047 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2jmgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
test-operator-controller-manager-5cd6c7f4c8-956cf_openstack-operators(7b6bce9b-9e9a-414a-aad7-5a8667c9557d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.613138 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" podUID="79791884-38fa-4d4e-ace2-cd02b0df26ab" Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.617438 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2jmgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-956cf_openstack-operators(7b6bce9b-9e9a-414a-aad7-5a8667c9557d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.617864 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p"] Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.619102 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf" podUID="7b6bce9b-9e9a-414a-aad7-5a8667c9557d" Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.619409 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d6cz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57988cc5b5-2mc7h_openstack-operators(2ce66d6e-19b8-41e7-890d-f17f4be5a920): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.621345 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d6cz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57988cc5b5-2mc7h_openstack-operators(2ce66d6e-19b8-41e7-890d-f17f4be5a920): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.622744 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dp477,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-cxg56_openstack-operators(9632aabc-46f3-44f3-b6ff-01923cddd5fa): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.622833 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" podUID="2ce66d6e-19b8-41e7-890d-f17f4be5a920" Nov 28 11:21:11 crc kubenswrapper[4772]: W1128 11:21:11.624055 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f63617e_c125_40e3_a273_4180f7d8d45c.slice/crio-cb283f928b37b1d02929512211912873ea3153fcf3efac7e9bd3567a21a6dd46 WatchSource:0}: Error finding container cb283f928b37b1d02929512211912873ea3153fcf3efac7e9bd3567a21a6dd46: Status 404 returned error can't find the container with id cb283f928b37b1d02929512211912873ea3153fcf3efac7e9bd3567a21a6dd46 Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.624546 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dp477,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-cxg56_openstack-operators(9632aabc-46f3-44f3-b6ff-01923cddd5fa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.625531 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: 
{{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgmnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-64cdc6ff96-k7z7p_openstack-operators(7f63617e-c125-40e3-a273-4180f7d8d45c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.625598 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56" podUID="9632aabc-46f3-44f3-b6ff-01923cddd5fa" Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.627659 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgmnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-64cdc6ff96-k7z7p_openstack-operators(7f63617e-c125-40e3-a273-4180f7d8d45c): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.628945 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" podUID="7f63617e-c125-40e3-a273-4180f7d8d45c"
Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.705962 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58"]
Nov 28 11:21:11 crc kubenswrapper[4772]: W1128 11:21:11.714651 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode98df0ac_d8d5_49fd_a331_509b0736bbb1.slice/crio-46c7ed8b94de2672672a60823ffed8f353d7fa1bc7196b5200d3e319d65b7b16 WatchSource:0}: Error finding container 46c7ed8b94de2672672a60823ffed8f353d7fa1bc7196b5200d3e319d65b7b16: Status 404 returned error can't find the container with id 46c7ed8b94de2672672a60823ffed8f353d7fa1bc7196b5200d3e319d65b7b16
Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.717731 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz"]
Nov 28 11:21:11 crc kubenswrapper[4772]: W1128 11:21:11.725947 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e95de97_8ad3_493a_a98f_5541e23ca701.slice/crio-4be377cf882a86b6d3ae787f1b130e90e32a83768d75f93d132d6139c7e64231 WatchSource:0}: Error finding container 4be377cf882a86b6d3ae787f1b130e90e32a83768d75f93d132d6139c7e64231: Status 404 returned error can't find the container with id 4be377cf882a86b6d3ae787f1b130e90e32a83768d75f93d132d6139c7e64231
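The repeated "ErrImagePull: pull QPS exceeded" failures in these entries are produced by the kubelet itself, not by quay.io: image pulls are gated by a client-side token-bucket limiter sized by the kubelet's registryPullQPS and registryBurst settings, and when this many operator pods start at once the bucket drains and the remaining pulls are rejected immediately rather than queued. A minimal Go sketch of that fail-fast pattern (illustrative only; the limit of 5 QPS with burst 10 matches the usual kubelet defaults but is an assumption here, and the image names are placeholders):

```go
// Illustrative sketch of the kubelet's image-pull throttle: a token
// bucket checked non-blockingly, so pulls beyond the burst fail at once
// with "pull QPS exceeded" instead of waiting for a token.
package main

import (
	"errors"
	"fmt"

	"golang.org/x/time/rate"
)

var errPullQPSExceeded = errors.New("pull QPS exceeded")

func pullImage(limiter *rate.Limiter, image string) error {
	if !limiter.Allow() { // non-blocking check; no token means fail now
		return errPullQPSExceeded
	}
	// A real pull would contact the registry here.
	return nil
}

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // assumed: 5 QPS, burst 10

	// Two dozen operator pods starting together: the first 10 pulls use up
	// the burst, the rest fail and are retried later as ImagePullBackOff.
	for i := 0; i < 24; i++ {
		img := fmt.Sprintf("quay.io/example/operator-%d:latest", i)
		if err := pullImage(limiter, img); err != nil {
			fmt.Printf("%-34s -> %v\n", img, err)
		} else {
			fmt.Printf("%-34s -> pulled\n", img)
		}
	}
}
```

The pulls themselves are fine once the limiter refills, which is consistent with the later entries where the same pods' containers eventually start.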
Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.727885 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8m5sm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-ldrjz_openstack-operators(6e95de97-8ad3-493a-a98f-5541e23ca701): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.729079 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz" podUID="6e95de97-8ad3-493a-a98f-5541e23ca701"
Nov 28 11:21:11 crc kubenswrapper[4772]: I1128 11:21:11.814937 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs\" (UID: \"47a38974-64f9-46ba-b4cf-f61c0d3a485e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs"
Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.815159 4772 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 11:21:11 crc kubenswrapper[4772]: E1128 11:21:11.815260 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert podName:47a38974-64f9-46ba-b4cf-f61c0d3a485e nodeName:}" failed. No retries permitted until 2025-11-28 11:21:13.815236646 +0000 UTC m=+872.138479893 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" (UID: "47a38974-64f9-46ba-b4cf-f61c0d3a485e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 11:21:12 crc kubenswrapper[4772]: I1128 11:21:12.118592 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:12 crc kubenswrapper[4772]: I1128 11:21:12.118771 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:12 crc kubenswrapper[4772]: E1128 11:21:12.118937 4772 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 11:21:12 crc kubenswrapper[4772]: E1128 11:21:12.118996 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs podName:ed05bf4d-d7d9-40eb-965a-5c866fc76b3c nodeName:}" failed. No retries permitted until 2025-11-28 11:21:14.118977324 +0000 UTC m=+872.442220561 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs") pod "openstack-operator-controller-manager-6fbf799579-db6rg" (UID: "ed05bf4d-d7d9-40eb-965a-5c866fc76b3c") : secret "metrics-server-cert" not found Nov 28 11:21:12 crc kubenswrapper[4772]: E1128 11:21:12.119498 4772 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 11:21:12 crc kubenswrapper[4772]: E1128 11:21:12.119803 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs podName:ed05bf4d-d7d9-40eb-965a-5c866fc76b3c nodeName:}" failed. No retries permitted until 2025-11-28 11:21:14.119770395 +0000 UTC m=+872.443013662 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs") pod "openstack-operator-controller-manager-6fbf799579-db6rg" (UID: "ed05bf4d-d7d9-40eb-965a-5c866fc76b3c") : secret "webhook-server-cert" not found Nov 28 11:21:12 crc kubenswrapper[4772]: I1128 11:21:12.524275 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz" event={"ID":"6e95de97-8ad3-493a-a98f-5541e23ca701","Type":"ContainerStarted","Data":"4be377cf882a86b6d3ae787f1b130e90e32a83768d75f93d132d6139c7e64231"} Nov 28 11:21:12 crc kubenswrapper[4772]: I1128 11:21:12.525305 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r" event={"ID":"9d009ed5-21d1-4f1c-b1ec-bef39cf8a265","Type":"ContainerStarted","Data":"6defdb235b5c4af847166701280e181e0d36ca11e3cabc19b94d3b4e8df0e47b"} Nov 28 11:21:12 crc kubenswrapper[4772]: E1128 11:21:12.526271 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz" podUID="6e95de97-8ad3-493a-a98f-5541e23ca701" Nov 28 11:21:12 crc kubenswrapper[4772]: E1128 11:21:12.527080 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r" podUID="9d009ed5-21d1-4f1c-b1ec-bef39cf8a265" Nov 28 11:21:12 crc kubenswrapper[4772]: I1128 11:21:12.528395 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56" event={"ID":"9632aabc-46f3-44f3-b6ff-01923cddd5fa","Type":"ContainerStarted","Data":"f10cc9eb460a14e5addcae14d17748b7cb5c449e3c9c8153eb918108fcef42c7"} Nov 28 11:21:12 crc kubenswrapper[4772]: E1128 11:21:12.529664 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56" podUID="9632aabc-46f3-44f3-b6ff-01923cddd5fa" Nov 28 11:21:12 crc kubenswrapper[4772]: I1128 11:21:12.529793 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58" event={"ID":"e98df0ac-d8d5-49fd-a331-509b0736bbb1","Type":"ContainerStarted","Data":"46c7ed8b94de2672672a60823ffed8f353d7fa1bc7196b5200d3e319d65b7b16"} Nov 28 11:21:12 crc kubenswrapper[4772]: I1128 11:21:12.531852 4772 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf" event={"ID":"7b6bce9b-9e9a-414a-aad7-5a8667c9557d","Type":"ContainerStarted","Data":"f8df7ade9ab7424f3eef8943bcdb4b1c57d255f9434d3cb94cad5d2348cf58eb"} Nov 28 11:21:12 crc kubenswrapper[4772]: I1128 11:21:12.533183 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" event={"ID":"7f63617e-c125-40e3-a273-4180f7d8d45c","Type":"ContainerStarted","Data":"cb283f928b37b1d02929512211912873ea3153fcf3efac7e9bd3567a21a6dd46"} Nov 28 11:21:12 crc kubenswrapper[4772]: E1128 11:21:12.533746 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf" podUID="7b6bce9b-9e9a-414a-aad7-5a8667c9557d" Nov 28 11:21:12 crc kubenswrapper[4772]: I1128 11:21:12.534809 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" event={"ID":"79791884-38fa-4d4e-ace2-cd02b0df26ab","Type":"ContainerStarted","Data":"b9cdea80272150313cd4d63d4c706994724e906cf7c8768da68e3c69dee03ea4"} Nov 28 11:21:12 crc kubenswrapper[4772]: E1128 11:21:12.536765 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" podUID="7f63617e-c125-40e3-a273-4180f7d8d45c" Nov 28 11:21:12 crc kubenswrapper[4772]: E1128 11:21:12.536818 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" podUID="79791884-38fa-4d4e-ace2-cd02b0df26ab" Nov 28 11:21:12 crc kubenswrapper[4772]: I1128 11:21:12.537311 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" event={"ID":"2ce66d6e-19b8-41e7-890d-f17f4be5a920","Type":"ContainerStarted","Data":"b4ab8c6d3b9e554ef0be0e89883b6b888fa6bb0c10a277ae3fb814fff79336c3"} Nov 28 11:21:12 crc kubenswrapper[4772]: E1128 11:21:12.539400 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" podUID="2ce66d6e-19b8-41e7-890d-f17f4be5a920" Nov 28 11:21:13 crc kubenswrapper[4772]: I1128 11:21:13.539404 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert\") pod \"infra-operator-controller-manager-57548d458d-c9nnk\" (UID: \"90486ac7-ac7e-418a-9f2a-5bf934e996ca\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" Nov 28 11:21:13 crc kubenswrapper[4772]: E1128 11:21:13.540093 4772 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 11:21:13 crc kubenswrapper[4772]: E1128 11:21:13.540143 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert podName:90486ac7-ac7e-418a-9f2a-5bf934e996ca nodeName:}" failed. No retries permitted until 2025-11-28 11:21:17.540129276 +0000 UTC m=+875.863372493 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert") pod "infra-operator-controller-manager-57548d458d-c9nnk" (UID: "90486ac7-ac7e-418a-9f2a-5bf934e996ca") : secret "infra-operator-webhook-server-cert" not found Nov 28 11:21:13 crc kubenswrapper[4772]: E1128 11:21:13.549673 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz" podUID="6e95de97-8ad3-493a-a98f-5541e23ca701" Nov 28 11:21:13 crc kubenswrapper[4772]: E1128 11:21:13.573870 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" podUID="7f63617e-c125-40e3-a273-4180f7d8d45c" Nov 28 11:21:13 crc kubenswrapper[4772]: E1128 11:21:13.574640 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56" podUID="9632aabc-46f3-44f3-b6ff-01923cddd5fa" Nov 28 11:21:13 crc kubenswrapper[4772]: E1128 11:21:13.574925 
4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf" podUID="7b6bce9b-9e9a-414a-aad7-5a8667c9557d" Nov 28 11:21:13 crc kubenswrapper[4772]: E1128 11:21:13.575077 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" podUID="79791884-38fa-4d4e-ace2-cd02b0df26ab" Nov 28 11:21:13 crc kubenswrapper[4772]: E1128 11:21:13.575234 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r" podUID="9d009ed5-21d1-4f1c-b1ec-bef39cf8a265" Nov 28 11:21:13 crc kubenswrapper[4772]: E1128 11:21:13.575466 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" podUID="2ce66d6e-19b8-41e7-890d-f17f4be5a920" Nov 28 11:21:13 crc kubenswrapper[4772]: I1128 11:21:13.843951 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs\" (UID: \"47a38974-64f9-46ba-b4cf-f61c0d3a485e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" Nov 28 11:21:13 crc kubenswrapper[4772]: E1128 11:21:13.844147 4772 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 11:21:13 crc kubenswrapper[4772]: E1128 11:21:13.844217 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert podName:47a38974-64f9-46ba-b4cf-f61c0d3a485e nodeName:}" failed. 
No retries permitted until 2025-11-28 11:21:17.844195904 +0000 UTC m=+876.167439131 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" (UID: "47a38974-64f9-46ba-b4cf-f61c0d3a485e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 11:21:14 crc kubenswrapper[4772]: I1128 11:21:14.147547 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:14 crc kubenswrapper[4772]: I1128 11:21:14.147745 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:14 crc kubenswrapper[4772]: E1128 11:21:14.147878 4772 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 11:21:14 crc kubenswrapper[4772]: E1128 11:21:14.147963 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs podName:ed05bf4d-d7d9-40eb-965a-5c866fc76b3c nodeName:}" failed. No retries permitted until 2025-11-28 11:21:18.147909532 +0000 UTC m=+876.471152749 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs") pod "openstack-operator-controller-manager-6fbf799579-db6rg" (UID: "ed05bf4d-d7d9-40eb-965a-5c866fc76b3c") : secret "webhook-server-cert" not found Nov 28 11:21:14 crc kubenswrapper[4772]: E1128 11:21:14.150337 4772 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 11:21:14 crc kubenswrapper[4772]: E1128 11:21:14.150414 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs podName:ed05bf4d-d7d9-40eb-965a-5c866fc76b3c nodeName:}" failed. No retries permitted until 2025-11-28 11:21:18.150394326 +0000 UTC m=+876.473637553 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs") pod "openstack-operator-controller-manager-6fbf799579-db6rg" (UID: "ed05bf4d-d7d9-40eb-965a-5c866fc76b3c") : secret "metrics-server-cert" not found
Nov 28 11:21:17 crc kubenswrapper[4772]: I1128 11:21:17.602409 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert\") pod \"infra-operator-controller-manager-57548d458d-c9nnk\" (UID: \"90486ac7-ac7e-418a-9f2a-5bf934e996ca\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk"
Nov 28 11:21:17 crc kubenswrapper[4772]: E1128 11:21:17.602592 4772 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 28 11:21:17 crc kubenswrapper[4772]: E1128 11:21:17.602905 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert podName:90486ac7-ac7e-418a-9f2a-5bf934e996ca nodeName:}" failed. No retries permitted until 2025-11-28 11:21:25.602887069 +0000 UTC m=+883.926130286 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert") pod "infra-operator-controller-manager-57548d458d-c9nnk" (UID: "90486ac7-ac7e-418a-9f2a-5bf934e996ca") : secret "infra-operator-webhook-server-cert" not found
Nov 28 11:21:17 crc kubenswrapper[4772]: I1128 11:21:17.914683 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs\" (UID: \"47a38974-64f9-46ba-b4cf-f61c0d3a485e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs"
Nov 28 11:21:17 crc kubenswrapper[4772]: E1128 11:21:17.914947 4772 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
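The mount failures above also show the kubelet's retry policy: the same "secret not found" error is retried with durationBeforeRetry doubling from 2s to 4s to 8s rather than at a fixed interval. A short Go sketch of that capped-doubling loop (illustrative only; the 2m cap and the attempt on which the secret finally exists are assumptions, and in the kubelet the delay is enforced as a "no retries permitted until" deadline rather than a sleep):

```go
// Illustrative sketch of the doubling durationBeforeRetry seen above:
// each failed MountVolume.SetUp doubles the wait, up to a cap, until the
// referenced Secret finally exists and the mount succeeds.
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountVolume stands in for MountVolume.SetUp; the secret name is taken
// from the log, but the function itself is a hypothetical stand-in.
func mountVolume(secretExists bool) error {
	if !secretExists {
		return errors.New(`secret "webhook-server-cert" not found`)
	}
	return nil
}

func main() {
	const maxDelay = 2 * time.Minute // assumed cap
	delay := 2 * time.Second

	for attempt := 1; ; attempt++ {
		// Assume the secret shows up before the fourth attempt, as it
		// effectively does in the log (mounts succeed at 11:21:25).
		if err := mountVolume(attempt >= 4); err == nil {
			fmt.Printf("attempt %d: MountVolume.SetUp succeeded\n", attempt)
			return
		} else {
			fmt.Printf("attempt %d: %v; durationBeforeRetry %s\n", attempt, err, delay)
		}
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
```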
Nov 28 11:21:17 crc kubenswrapper[4772]: E1128 11:21:17.915043 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert podName:47a38974-64f9-46ba-b4cf-f61c0d3a485e nodeName:}" failed. No retries permitted until 2025-11-28 11:21:25.915016954 +0000 UTC m=+884.238260191 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" (UID: "47a38974-64f9-46ba-b4cf-f61c0d3a485e") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 11:21:18 crc kubenswrapper[4772]: I1128 11:21:18.218949 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg"
Nov 28 11:21:18 crc kubenswrapper[4772]: I1128 11:21:18.219035 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg"
Nov 28 11:21:18 crc kubenswrapper[4772]: E1128 11:21:18.219190 4772 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 28 11:21:18 crc kubenswrapper[4772]: E1128 11:21:18.219234 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs podName:ed05bf4d-d7d9-40eb-965a-5c866fc76b3c nodeName:}" failed. No retries permitted until 2025-11-28 11:21:26.219221955 +0000 UTC m=+884.542465182 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs") pod "openstack-operator-controller-manager-6fbf799579-db6rg" (UID: "ed05bf4d-d7d9-40eb-965a-5c866fc76b3c") : secret "metrics-server-cert" not found
Nov 28 11:21:18 crc kubenswrapper[4772]: E1128 11:21:18.219585 4772 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 28 11:21:18 crc kubenswrapper[4772]: E1128 11:21:18.219678 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs podName:ed05bf4d-d7d9-40eb-965a-5c866fc76b3c nodeName:}" failed. No retries permitted until 2025-11-28 11:21:26.219656396 +0000 UTC m=+884.542899703 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs") pod "openstack-operator-controller-manager-6fbf799579-db6rg" (UID: "ed05bf4d-d7d9-40eb-965a-5c866fc76b3c") : secret "webhook-server-cert" not found Nov 28 11:21:24 crc kubenswrapper[4772]: E1128 11:21:24.164635 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9413ed1bc2ae1a6bd28c59b1c7f7e91e1638de7b2a7d4729ed3fa2135182465d" Nov 28 11:21:24 crc kubenswrapper[4772]: E1128 11:21:24.165484 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9413ed1bc2ae1a6bd28c59b1c7f7e91e1638de7b2a7d4729ed3fa2135182465d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b6ncn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5d494799bf-qksc2_openstack-operators(b0c3e372-422f-46e4-94e3-51ed4b3c0fd0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:21:24 crc kubenswrapper[4772]: E1128 11:21:24.664679 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e00a9ed0ab26c5b745bd804ab1fe6b22428d026f17ea05a05f045e060342f46c" Nov 28 11:21:24 crc 
kubenswrapper[4772]: E1128 11:21:24.665119 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e00a9ed0ab26c5b745bd804ab1fe6b22428d026f17ea05a05f045e060342f46c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4zpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6fdcddb789-l8rvz_openstack-operators(f67d3c6d-0b62-4162-bfeb-24da441f5edc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:21:25 crc kubenswrapper[4772]: E1128 11:21:25.341832 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6" Nov 28 11:21:25 crc kubenswrapper[4772]: E1128 11:21:25.342028 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4f4vm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-56897c768d-jpf2b_openstack-operators(e955b059-294d-40ab-b4af-6bbf7c5bb2e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 11:21:25 crc kubenswrapper[4772]: I1128 11:21:25.616005 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert\") pod \"infra-operator-controller-manager-57548d458d-c9nnk\" (UID: \"90486ac7-ac7e-418a-9f2a-5bf934e996ca\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk"
Nov 28 11:21:25 crc kubenswrapper[4772]: I1128 11:21:25.621388 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90486ac7-ac7e-418a-9f2a-5bf934e996ca-cert\") pod \"infra-operator-controller-manager-57548d458d-c9nnk\" (UID: \"90486ac7-ac7e-418a-9f2a-5bf934e996ca\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk"
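At 11:21:25-11:21:26 the missing certificate Secrets finally exist, the pending mounts flip to "MountVolume.SetUp succeeded", and sandbox creation for the blocked pods begins. When diagnosing this kind of stall, one way to confirm which Secrets the kubelet is still waiting on is to query them directly; a minimal client-go sketch (the namespace and secret names are taken from the log above, while the kubeconfig path is an assumption):

```go
// Illustrative check for the cert Secrets the mounts above depend on.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; use the in-cluster config where appropriate.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Each of these produced "secret ... not found" mount errors above
	// until the controller that issues them had created the Secret.
	for _, name := range []string{
		"infra-operator-webhook-server-cert",
		"openstack-baremetal-operator-webhook-server-cert",
		"metrics-server-cert",
		"webhook-server-cert",
	} {
		_, err := client.CoreV1().Secrets("openstack-operators").
			Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("%-50s MISSING (%v)\n", name, err)
			continue
		}
		fmt.Printf("%-50s present\n", name)
	}
}
```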
Nov 28 11:21:25 crc kubenswrapper[4772]: I1128 11:21:25.648275 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk"
Nov 28 11:21:25 crc kubenswrapper[4772]: I1128 11:21:25.919920 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs\" (UID: \"47a38974-64f9-46ba-b4cf-f61c0d3a485e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs"
Nov 28 11:21:25 crc kubenswrapper[4772]: I1128 11:21:25.924987 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a38974-64f9-46ba-b4cf-f61c0d3a485e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs\" (UID: \"47a38974-64f9-46ba-b4cf-f61c0d3a485e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs"
Nov 28 11:21:25 crc kubenswrapper[4772]: I1128 11:21:25.965389 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs"
Nov 28 11:21:26 crc kubenswrapper[4772]: I1128 11:21:26.223462 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg"
Nov 28 11:21:26 crc kubenswrapper[4772]: I1128 11:21:26.223976 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg"
Nov 28 11:21:26 crc kubenswrapper[4772]: I1128 11:21:26.231177 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-metrics-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg"
Nov 28 11:21:26 crc kubenswrapper[4772]: I1128 11:21:26.235452 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed05bf4d-d7d9-40eb-965a-5c866fc76b3c-webhook-certs\") pod \"openstack-operator-controller-manager-6fbf799579-db6rg\" (UID: \"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c\") " pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg"
Nov 28 11:21:26 crc kubenswrapper[4772]: E1128 11:21:26.274481 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:d65dbfc956e9cf376f3c48fc3a0942cb7306b5164f898c40d1efca106df81db7"
Nov 28 11:21:26 crc kubenswrapper[4772]: E1128 11:21:26.274696 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:d65dbfc956e9cf376f3c48fc3a0942cb7306b5164f898c40d1efca106df81db7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nsrbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-67cb4dc6d4-tj9m8_openstack-operators(c29a1c46-5112-4d85-8f8f-b494575bd428): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:21:26 crc kubenswrapper[4772]: I1128 11:21:26.302611 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:26 crc kubenswrapper[4772]: I1128 11:21:26.677810 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ftntr" event={"ID":"f569792f-b95e-4f7a-b58e-22bd27c56dfd","Type":"ContainerStarted","Data":"cb9a323ddcdb28e7a204f1b751bf3cb94ebb34783b218f6c35292029bc9ce43b"} Nov 28 11:21:26 crc kubenswrapper[4772]: I1128 11:21:26.681501 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-6r566" event={"ID":"7f96e59b-e8a5-471a-8e43-4ae8edfbc7bb","Type":"ContainerStarted","Data":"d066ea559bcde89f0feb8fb43905e6631dc6597509a5f6e4253d9b97df383ff1"} Nov 28 11:21:26 crc kubenswrapper[4772]: I1128 11:21:26.683585 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58" event={"ID":"e98df0ac-d8d5-49fd-a331-509b0736bbb1","Type":"ContainerStarted","Data":"583bfb4677cfe4234d837086e701eb823799168b3fa22dd3730a4bd19885e4fa"} Nov 28 11:21:26 crc kubenswrapper[4772]: I1128 11:21:26.688484 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf" event={"ID":"a7f0f276-5402-4e33-bd63-f6df7819f966","Type":"ContainerStarted","Data":"3cd23f48c0dd9fbb2dcdc10b99dbe7c0b88a1d4dbc2972b061fbea9c38e3fad9"} Nov 28 11:21:26 crc kubenswrapper[4772]: I1128 11:21:26.814341 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs"] Nov 28 11:21:27 crc kubenswrapper[4772]: I1128 11:21:27.024969 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk"] Nov 28 11:21:27 crc kubenswrapper[4772]: I1128 11:21:27.029669 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg"] Nov 28 11:21:27 crc kubenswrapper[4772]: W1128 11:21:27.453075 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47a38974_64f9_46ba_b4cf_f61c0d3a485e.slice/crio-7a965137bb22c5b681c9b6473c8c65e080a00a23e231726fcca2014319f1c03f WatchSource:0}: Error finding container 7a965137bb22c5b681c9b6473c8c65e080a00a23e231726fcca2014319f1c03f: Status 404 returned error can't find the container with id 7a965137bb22c5b681c9b6473c8c65e080a00a23e231726fcca2014319f1c03f Nov 28 11:21:27 crc kubenswrapper[4772]: W1128 11:21:27.482175 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded05bf4d_d7d9_40eb_965a_5c866fc76b3c.slice/crio-17a577d725eff0d435589676c8ffbe5af09abac3baf628fcc63834e5869ae9d1 WatchSource:0}: Error finding container 17a577d725eff0d435589676c8ffbe5af09abac3baf628fcc63834e5869ae9d1: Status 404 returned error can't find the container with id 17a577d725eff0d435589676c8ffbe5af09abac3baf628fcc63834e5869ae9d1 Nov 28 11:21:27 crc kubenswrapper[4772]: W1128 11:21:27.482869 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90486ac7_ac7e_418a_9f2a_5bf934e996ca.slice/crio-c870cbf5c34daa287e252995d05171dc585d374dce1665435981030b7b3f557f WatchSource:0}: Error finding container 
c870cbf5c34daa287e252995d05171dc585d374dce1665435981030b7b3f557f: Status 404 returned error can't find the container with id c870cbf5c34daa287e252995d05171dc585d374dce1665435981030b7b3f557f Nov 28 11:21:27 crc kubenswrapper[4772]: I1128 11:21:27.718983 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" event={"ID":"47a38974-64f9-46ba-b4cf-f61c0d3a485e","Type":"ContainerStarted","Data":"7a965137bb22c5b681c9b6473c8c65e080a00a23e231726fcca2014319f1c03f"} Nov 28 11:21:27 crc kubenswrapper[4772]: I1128 11:21:27.721106 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" event={"ID":"90486ac7-ac7e-418a-9f2a-5bf934e996ca","Type":"ContainerStarted","Data":"c870cbf5c34daa287e252995d05171dc585d374dce1665435981030b7b3f557f"} Nov 28 11:21:27 crc kubenswrapper[4772]: I1128 11:21:27.725345 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v" event={"ID":"c063126d-a9d6-4a2c-96b4-0b0a42a94fff","Type":"ContainerStarted","Data":"04c602068ba05ed308fe47409a609ab0b1acb1b54a58adb1228683c6bb227a81"} Nov 28 11:21:27 crc kubenswrapper[4772]: I1128 11:21:27.730927 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-nfb47" event={"ID":"2dc36b3d-99ac-4a89-bdc3-309a12cc887e","Type":"ContainerStarted","Data":"5a5f9fbc7ed3fedd53ebb869d28b668c7f9919763d461cb40cfbb302e07cb999"} Nov 28 11:21:27 crc kubenswrapper[4772]: I1128 11:21:27.734084 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" event={"ID":"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c","Type":"ContainerStarted","Data":"17a577d725eff0d435589676c8ffbe5af09abac3baf628fcc63834e5869ae9d1"} Nov 28 11:21:27 crc kubenswrapper[4772]: I1128 11:21:27.744715 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw" event={"ID":"d117b1a7-48be-4cc5-928f-b22d31a16b7f","Type":"ContainerStarted","Data":"b5025b42537f74f1d3bc2615e96beeb052fd057d082babd1b426033104ece5d3"} Nov 28 11:21:28 crc kubenswrapper[4772]: I1128 11:21:28.768118 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8" event={"ID":"7657168c-6a48-435a-92c3-b93970b60d07","Type":"ContainerStarted","Data":"95e7b42feb8906a32d6969450ce0e90a1e5a684396872f42ac5359320be97621"} Nov 28 11:21:28 crc kubenswrapper[4772]: I1128 11:21:28.773142 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-kwb9r" event={"ID":"5278900b-7407-46c5-b420-c5569e508132","Type":"ContainerStarted","Data":"d0b4621baa39c8247f79d18bf9acdc6c015d04b024dba53927f6ef65493b6c63"} Nov 28 11:21:30 crc kubenswrapper[4772]: I1128 11:21:30.786441 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" event={"ID":"ed05bf4d-d7d9-40eb-965a-5c866fc76b3c","Type":"ContainerStarted","Data":"0c3dd122fdbc300a1d2181b010fd75cfa20bb7306e259d2d862ad8a2fc62af86"} Nov 28 11:21:30 crc kubenswrapper[4772]: I1128 11:21:30.787223 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:30 crc kubenswrapper[4772]: I1128 11:21:30.815687 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" podStartSLOduration=20.815671409 podStartE2EDuration="20.815671409s" podCreationTimestamp="2025-11-28 11:21:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:21:30.808016341 +0000 UTC m=+889.131259568" watchObservedRunningTime="2025-11-28 11:21:30.815671409 +0000 UTC m=+889.138914636" Nov 28 11:21:36 crc kubenswrapper[4772]: I1128 11:21:36.309847 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6fbf799579-db6rg" Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.342803 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qm4xd"] Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.345564 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.356881 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qm4xd"] Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.368177 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac39922-73a2-4092-b8f3-eaf6bddb58c1-utilities\") pod \"redhat-marketplace-qm4xd\" (UID: \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\") " pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.368276 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac39922-73a2-4092-b8f3-eaf6bddb58c1-catalog-content\") pod \"redhat-marketplace-qm4xd\" (UID: \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\") " pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.368330 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b6mg\" (UniqueName: \"kubernetes.io/projected/dac39922-73a2-4092-b8f3-eaf6bddb58c1-kube-api-access-9b6mg\") pod \"redhat-marketplace-qm4xd\" (UID: \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\") " pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.470114 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac39922-73a2-4092-b8f3-eaf6bddb58c1-utilities\") pod \"redhat-marketplace-qm4xd\" (UID: \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\") " pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.470187 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac39922-73a2-4092-b8f3-eaf6bddb58c1-catalog-content\") pod \"redhat-marketplace-qm4xd\" (UID: \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\") " pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.470215 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9b6mg\" (UniqueName: \"kubernetes.io/projected/dac39922-73a2-4092-b8f3-eaf6bddb58c1-kube-api-access-9b6mg\") pod \"redhat-marketplace-qm4xd\" (UID: \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\") " pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.471013 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac39922-73a2-4092-b8f3-eaf6bddb58c1-utilities\") pod \"redhat-marketplace-qm4xd\" (UID: \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\") " pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.471111 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac39922-73a2-4092-b8f3-eaf6bddb58c1-catalog-content\") pod \"redhat-marketplace-qm4xd\" (UID: \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\") " pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.491582 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b6mg\" (UniqueName: \"kubernetes.io/projected/dac39922-73a2-4092-b8f3-eaf6bddb58c1-kube-api-access-9b6mg\") pod \"redhat-marketplace-qm4xd\" (UID: \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\") " pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.673450 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:47 crc kubenswrapper[4772]: E1128 11:21:47.830114 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7" Nov 28 11:21:47 crc kubenswrapper[4772]: E1128 11:21:47.830286 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nfkrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-57548d458d-c9nnk_openstack-operators(90486ac7-ac7e-418a-9f2a-5bf934e996ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:21:47 crc kubenswrapper[4772]: E1128 11:21:47.849350 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 28 11:21:47 crc kubenswrapper[4772]: E1128 11:21:47.849521 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9vvfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6b7f75547b-zkr58_openstack-operators(e98df0ac-d8d5-49fd-a331-509b0736bbb1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:21:47 crc kubenswrapper[4772]: E1128 11:21:47.850821 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: 
\"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58" podUID="e98df0ac-d8d5-49fd-a331-509b0736bbb1" Nov 28 11:21:47 crc kubenswrapper[4772]: E1128 11:21:47.855656 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 28 11:21:47 crc kubenswrapper[4772]: E1128 11:21:47.855973 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t22wb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-sc4xf_openstack-operators(a7f0f276-5402-4e33-bd63-f6df7819f966): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:21:47 crc kubenswrapper[4772]: E1128 11:21:47.857205 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf" podUID="a7f0f276-5402-4e33-bd63-f6df7819f966" Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.911974 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf" Nov 28 11:21:47 crc kubenswrapper[4772]: I1128 11:21:47.914149 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf" Nov 28 11:21:48 crc kubenswrapper[4772]: E1128 11:21:48.331725 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 28 11:21:48 crc kubenswrapper[4772]: E1128 11:21:48.331957 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8m5sm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-ldrjz_openstack-operators(6e95de97-8ad3-493a-a98f-5541e23ca701): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:21:48 crc kubenswrapper[4772]: E1128 11:21:48.333113 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz" podUID="6e95de97-8ad3-493a-a98f-5541e23ca701" Nov 28 11:21:48 crc kubenswrapper[4772]: E1128 11:21:48.745995 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 28 11:21:48 crc kubenswrapper[4772]: E1128 11:21:48.746178 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bd72r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5d499bf58b-2w6xw_openstack-operators(d117b1a7-48be-4cc5-928f-b22d31a16b7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:21:48 crc kubenswrapper[4772]: E1128 11:21:48.747350 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw" podUID="d117b1a7-48be-4cc5-928f-b22d31a16b7f" Nov 28 11:21:48 crc kubenswrapper[4772]: E1128 11:21:48.751490 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4" Nov 28 11:21:48 crc kubenswrapper[4772]: E1128 11:21:48.751664 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rp5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d77b94747-pqx6v_openstack-operators(79791884-38fa-4d4e-ace2-cd02b0df26ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:21:48 crc kubenswrapper[4772]: I1128 11:21:48.916790 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw" Nov 28 11:21:48 crc kubenswrapper[4772]: I1128 11:21:48.918629 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw" Nov 28 11:21:49 crc kubenswrapper[4772]: E1128 11:21:49.172398 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423" Nov 28 11:21:49 crc kubenswrapper[4772]: E1128 11:21:49.172571 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d6cz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57988cc5b5-2mc7h_openstack-operators(2ce66d6e-19b8-41e7-890d-f17f4be5a920): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:21:49 crc kubenswrapper[4772]: E1128 11:21:49.621325 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 28 11:21:49 crc kubenswrapper[4772]: E1128 11:21:49.621499 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pksds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7b64f4fb85-8n59v_openstack-operators(c063126d-a9d6-4a2c-96b4-0b0a42a94fff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:21:49 crc kubenswrapper[4772]: E1128 11:21:49.622670 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v" podUID="c063126d-a9d6-4a2c-96b4-0b0a42a94fff" Nov 28 11:21:49 crc kubenswrapper[4772]: E1128 11:21:49.628550 4772 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c" Nov 28 11:21:49 crc kubenswrapper[4772]: E1128 11:21:49.628746 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgmnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-64cdc6ff96-k7z7p_openstack-operators(7f63617e-c125-40e3-a273-4180f7d8d45c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:21:49 crc kubenswrapper[4772]: E1128 11:21:49.628953 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 28 11:21:49 crc kubenswrapper[4772]: E1128 11:21:49.629073 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t2z9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-589cbd6b5b-7spr8_openstack-operators(7657168c-6a48-435a-92c3-b93970b60d07): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:21:49 crc kubenswrapper[4772]: E1128 11:21:49.630269 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8" podUID="7657168c-6a48-435a-92c3-b93970b60d07" Nov 28 11:21:49 crc kubenswrapper[4772]: E1128 11:21:49.660903 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 28 11:21:49 crc kubenswrapper[4772]: E1128 11:21:49.661037 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n7fnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-955677c94-kwb9r_openstack-operators(5278900b-7407-46c5-b420-c5569e508132): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:21:49 crc kubenswrapper[4772]: E1128 11:21:49.662136 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-955677c94-kwb9r" podUID="5278900b-7407-46c5-b420-c5569e508132" Nov 28 11:21:49 crc kubenswrapper[4772]: I1128 11:21:49.927309 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" event={"ID":"47a38974-64f9-46ba-b4cf-f61c0d3a485e","Type":"ContainerStarted","Data":"5dc1c92281440db4c5437dd48ac7455e46b08525deac262ca831c08ece267d56"} Nov 28 11:21:49 crc kubenswrapper[4772]: I1128 11:21:49.927675 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v" Nov 28 11:21:49 crc kubenswrapper[4772]: I1128 11:21:49.927868 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8" Nov 28 11:21:49 crc kubenswrapper[4772]: I1128 11:21:49.927909 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-955677c94-kwb9r" Nov 28 11:21:49 crc kubenswrapper[4772]: I1128 11:21:49.929656 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-955677c94-kwb9r" Nov 28 11:21:49 crc kubenswrapper[4772]: I1128 11:21:49.929870 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8" Nov 28 11:21:49 crc kubenswrapper[4772]: I1128 11:21:49.929900 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v" Nov 28 11:21:50 crc kubenswrapper[4772]: I1128 11:21:50.072527 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qm4xd"] Nov 28 11:21:50 crc kubenswrapper[4772]: W1128 11:21:50.098329 4772 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddac39922_73a2_4092_b8f3_eaf6bddb58c1.slice/crio-9657f40f8878f7fcaa9c448bc31bdac7aaae2fdaab94b51173c9e79c73ad07dc WatchSource:0}: Error finding container 9657f40f8878f7fcaa9c448bc31bdac7aaae2fdaab94b51173c9e79c73ad07dc: Status 404 returned error can't find the container with id 9657f40f8878f7fcaa9c448bc31bdac7aaae2fdaab94b51173c9e79c73ad07dc Nov 28 11:21:50 crc kubenswrapper[4772]: E1128 11:21:50.224772 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" podUID="7f63617e-c125-40e3-a273-4180f7d8d45c" Nov 28 11:21:50 crc kubenswrapper[4772]: E1128 11:21:50.243033 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" podUID="90486ac7-ac7e-418a-9f2a-5bf934e996ca" Nov 28 11:21:50 crc kubenswrapper[4772]: E1128 11:21:50.395623 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz" podUID="f67d3c6d-0b62-4162-bfeb-24da441f5edc" Nov 28 11:21:50 crc kubenswrapper[4772]: E1128 11:21:50.609388 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8" podUID="c29a1c46-5112-4d85-8f8f-b494575bd428" Nov 28 11:21:50 crc kubenswrapper[4772]: E1128 11:21:50.617731 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" podUID="79791884-38fa-4d4e-ace2-cd02b0df26ab" Nov 28 11:21:50 crc kubenswrapper[4772]: E1128 11:21:50.729585 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" podUID="2ce66d6e-19b8-41e7-890d-f17f4be5a920" Nov 28 11:21:50 crc kubenswrapper[4772]: E1128 11:21:50.755128 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b" podUID="e955b059-294d-40ab-b4af-6bbf7c5bb2e6" Nov 28 11:21:50 crc kubenswrapper[4772]: E1128 11:21:50.831909 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2" podUID="b0c3e372-422f-46e4-94e3-51ed4b3c0fd0" Nov 28 11:21:50 crc kubenswrapper[4772]: I1128 11:21:50.921991 4772 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58" Nov 28 11:21:50 crc kubenswrapper[4772]: I1128 11:21:50.924015 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58" Nov 28 11:21:50 crc kubenswrapper[4772]: I1128 11:21:50.996277 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf" event={"ID":"a7f0f276-5402-4e33-bd63-f6df7819f966","Type":"ContainerStarted","Data":"ea8947767dbafe4ea0111418dc31200822452005bacf2522d761c1b4c5863350"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.007466 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r" event={"ID":"9d009ed5-21d1-4f1c-b1ec-bef39cf8a265","Type":"ContainerStarted","Data":"60f7708dfe831227c525e75257996abbb5476ede786bcbbeaff0ed641b58d88d"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.010055 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56" event={"ID":"9632aabc-46f3-44f3-b6ff-01923cddd5fa","Type":"ContainerStarted","Data":"37553929ec93a3be31a9fb67b8a7c320fb503e1326a29673f007d208ebdef77d"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.013170 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58" event={"ID":"e98df0ac-d8d5-49fd-a331-509b0736bbb1","Type":"ContainerStarted","Data":"e33db37eaee5e3629c71add014d93fa4e1cf8eda4b8a156e36d858e3caa7b12f"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.022240 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8" event={"ID":"c29a1c46-5112-4d85-8f8f-b494575bd428","Type":"ContainerStarted","Data":"94442bb33da1abbd4c3ca6b928a1b489b920c54ccfe4050040ddcff2b180fe5e"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.024671 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.028331 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-sc4xf" podStartSLOduration=27.143027534 podStartE2EDuration="42.028316851s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.465671603 +0000 UTC m=+869.788914830" lastFinishedPulling="2025-11-28 11:21:26.35096092 +0000 UTC m=+884.674204147" observedRunningTime="2025-11-28 11:21:51.027823279 +0000 UTC m=+909.351066506" watchObservedRunningTime="2025-11-28 11:21:51.028316851 +0000 UTC m=+909.351560078" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.030681 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b" event={"ID":"e955b059-294d-40ab-b4af-6bbf7c5bb2e6","Type":"ContainerStarted","Data":"a2b3cefe32448a874d7d9ee44e809312cfc36743fabd109f7c62410215f77a0f"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.073590 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" 
event={"ID":"79791884-38fa-4d4e-ace2-cd02b0df26ab","Type":"ContainerStarted","Data":"cc85f143e9f398f077708960bedc30da2e4b2cfb8b091d8e5eec473ffbc45f58"} Nov 28 11:21:51 crc kubenswrapper[4772]: E1128 11:21:51.079410 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4\\\"\"" pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" podUID="79791884-38fa-4d4e-ace2-cd02b0df26ab" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.090484 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" event={"ID":"2ce66d6e-19b8-41e7-890d-f17f4be5a920","Type":"ContainerStarted","Data":"c9b70d084612295bca0c045b481886cb23bba3e1f5863e971ef76c72a9b84e6d"} Nov 28 11:21:51 crc kubenswrapper[4772]: E1128 11:21:51.094150 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" podUID="2ce66d6e-19b8-41e7-890d-f17f4be5a920" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.105197 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-zkr58" podStartSLOduration=27.48106359 podStartE2EDuration="42.105178438s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.717416898 +0000 UTC m=+870.040660125" lastFinishedPulling="2025-11-28 11:21:26.341531746 +0000 UTC m=+884.664774973" observedRunningTime="2025-11-28 11:21:51.104537211 +0000 UTC m=+909.427780438" watchObservedRunningTime="2025-11-28 11:21:51.105178438 +0000 UTC m=+909.428421675" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.116559 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v" event={"ID":"c063126d-a9d6-4a2c-96b4-0b0a42a94fff","Type":"ContainerStarted","Data":"3b36e595841841085fb3a100bd159eecef4b5ba1fe1acd7ca6fac14b8d5d48f4"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.157047 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8" event={"ID":"7657168c-6a48-435a-92c3-b93970b60d07","Type":"ContainerStarted","Data":"cf9c619e4d2eb2a7480b0660283bf0c502697c4944f34c1666f2e2c5e21184cd"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.174504 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" event={"ID":"47a38974-64f9-46ba-b4cf-f61c0d3a485e","Type":"ContainerStarted","Data":"0250ebe956610b3035b77c89840b007a40d85181953e9b94aad096af4335291b"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.175420 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.206739 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-5b77f656f-6r566" event={"ID":"7f96e59b-e8a5-471a-8e43-4ae8edfbc7bb","Type":"ContainerStarted","Data":"38fd2ce3085045e17da32584d2ad55881cc38791015d339e355ddba25bed3ebf"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.207516 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-6r566" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.210520 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qm4xd" event={"ID":"dac39922-73a2-4092-b8f3-eaf6bddb58c1","Type":"ContainerStarted","Data":"9657f40f8878f7fcaa9c448bc31bdac7aaae2fdaab94b51173c9e79c73ad07dc"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.218176 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2" event={"ID":"b0c3e372-422f-46e4-94e3-51ed4b3c0fd0","Type":"ContainerStarted","Data":"3382db67ee811d1085f610e04b84a43b32fae9c9d4fe6bc8d80d39f70de9699d"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.225644 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-6r566" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.238252 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-nfb47" event={"ID":"2dc36b3d-99ac-4a89-bdc3-309a12cc887e","Type":"ContainerStarted","Data":"2d6801c19b7fbd9b6b382d66204960ef09d82515d762f5249f17d36156053da0"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.238817 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-nfb47" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.259581 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-nfb47" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.273804 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7spr8" podStartSLOduration=27.184415294 podStartE2EDuration="42.273790195s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.251871038 +0000 UTC m=+869.575114265" lastFinishedPulling="2025-11-28 11:21:26.341245929 +0000 UTC m=+884.664489166" observedRunningTime="2025-11-28 11:21:51.226095522 +0000 UTC m=+909.549338749" watchObservedRunningTime="2025-11-28 11:21:51.273790195 +0000 UTC m=+909.597033412" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.273917 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" podStartSLOduration=20.571426758 podStartE2EDuration="42.273913608s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:27.462306588 +0000 UTC m=+885.785549805" lastFinishedPulling="2025-11-28 11:21:49.164793428 +0000 UTC m=+907.488036655" observedRunningTime="2025-11-28 11:21:51.273523508 +0000 UTC m=+909.596766735" watchObservedRunningTime="2025-11-28 11:21:51.273913608 +0000 UTC m=+909.597156835" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.276605 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw" event={"ID":"d117b1a7-48be-4cc5-928f-b22d31a16b7f","Type":"ContainerStarted","Data":"757da2163f669a150a834f1b3b103850b9a8ef955ef1c8dcb2479f23648c6855"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.298931 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8n59v" podStartSLOduration=27.321449075 podStartE2EDuration="42.298914464s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.374865306 +0000 UTC m=+869.698108533" lastFinishedPulling="2025-11-28 11:21:26.352330685 +0000 UTC m=+884.675573922" observedRunningTime="2025-11-28 11:21:51.29798742 +0000 UTC m=+909.621230647" watchObservedRunningTime="2025-11-28 11:21:51.298914464 +0000 UTC m=+909.622157691" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.305771 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ftntr" event={"ID":"f569792f-b95e-4f7a-b58e-22bd27c56dfd","Type":"ContainerStarted","Data":"feffecd13a320c3458b54a1ac3fe26d926452d6584273d70f8eaee3c94c1d24e"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.306703 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ftntr" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.312604 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf" event={"ID":"7b6bce9b-9e9a-414a-aad7-5a8667c9557d","Type":"ContainerStarted","Data":"76d8a04d49c785366a9699d51df0de9bde946dc3080dc25515f3e585f578f948"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.313095 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.316195 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" event={"ID":"7f63617e-c125-40e3-a273-4180f7d8d45c","Type":"ContainerStarted","Data":"970f080b6a101d12de2be00304cff8c13efac9640b0e2ab70d6662cd5b9d5b0a"} Nov 28 11:21:51 crc kubenswrapper[4772]: E1128 11:21:51.317609 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" podUID="7f63617e-c125-40e3-a273-4180f7d8d45c" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.317992 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ftntr" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.334040 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz" event={"ID":"f67d3c6d-0b62-4162-bfeb-24da441f5edc","Type":"ContainerStarted","Data":"358af92195833600b8936ccc581390f8d4dd5ac64de39a77ee0f40e05772c123"} Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.335231 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-nfb47" podStartSLOduration=4.091133746 podStartE2EDuration="42.335217192s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.453393006 +0000 UTC m=+869.776636233" lastFinishedPulling="2025-11-28 11:21:49.697476452 +0000 UTC m=+908.020719679" observedRunningTime="2025-11-28 11:21:51.332730478 +0000 UTC m=+909.655973705" watchObservedRunningTime="2025-11-28 11:21:51.335217192 +0000 UTC m=+909.658460419" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.351625 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" event={"ID":"90486ac7-ac7e-418a-9f2a-5bf934e996ca","Type":"ContainerStarted","Data":"99907bfecc32c69cd446b629b40df43947be37cc9e5891100f8b5ff689fa2602"} Nov 28 11:21:51 crc kubenswrapper[4772]: E1128 11:21:51.353641 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7\\\"\"" pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" podUID="90486ac7-ac7e-418a-9f2a-5bf934e996ca" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.408745 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-6r566" podStartSLOduration=4.011825106 podStartE2EDuration="42.408729301s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.243883092 +0000 UTC m=+869.567126319" lastFinishedPulling="2025-11-28 11:21:49.640787287 +0000 UTC m=+907.964030514" observedRunningTime="2025-11-28 11:21:51.402539412 +0000 UTC m=+909.725782639" watchObservedRunningTime="2025-11-28 11:21:51.408729301 +0000 UTC m=+909.731972528" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.446052 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2w6xw" podStartSLOduration=27.536834551 podStartE2EDuration="42.446038656s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.46478197 +0000 UTC m=+869.788025197" lastFinishedPulling="2025-11-28 11:21:26.373986065 +0000 UTC m=+884.697229302" observedRunningTime="2025-11-28 11:21:51.443783037 +0000 UTC m=+909.767026284" watchObservedRunningTime="2025-11-28 11:21:51.446038656 +0000 UTC m=+909.769281883" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.561690 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf" podStartSLOduration=3.512266398 podStartE2EDuration="41.561673534s" podCreationTimestamp="2025-11-28 11:21:10 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.611889671 +0000 UTC m=+869.935132898" lastFinishedPulling="2025-11-28 11:21:49.661296807 +0000 UTC m=+907.984540034" observedRunningTime="2025-11-28 11:21:51.56153437 +0000 UTC m=+909.884777587" watchObservedRunningTime="2025-11-28 11:21:51.561673534 +0000 UTC m=+909.884916761" Nov 28 11:21:51 crc kubenswrapper[4772]: I1128 11:21:51.635640 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ftntr" podStartSLOduration=4.277242014 
podStartE2EDuration="42.635617004s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.369559389 +0000 UTC m=+869.692802616" lastFinishedPulling="2025-11-28 11:21:49.727934379 +0000 UTC m=+908.051177606" observedRunningTime="2025-11-28 11:21:51.611196763 +0000 UTC m=+909.934440010" watchObservedRunningTime="2025-11-28 11:21:51.635617004 +0000 UTC m=+909.958860231" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.365264 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2" event={"ID":"b0c3e372-422f-46e4-94e3-51ed4b3c0fd0","Type":"ContainerStarted","Data":"0aff7d3701e6c5e462df4f3184ceb2cce73705be3cc8c911cc8b57c63998b5cb"} Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.365427 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.367952 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56" event={"ID":"9632aabc-46f3-44f3-b6ff-01923cddd5fa","Type":"ContainerStarted","Data":"70810f128ab236fd92a743d0dce914a1237c27140988fab848cfa76ec7e76543"} Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.368077 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.370807 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf" event={"ID":"7b6bce9b-9e9a-414a-aad7-5a8667c9557d","Type":"ContainerStarted","Data":"78ec581e9121fc08056b79e9d441927dd93862b622ce08eccfce96acb5584011"} Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.372857 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz" event={"ID":"f67d3c6d-0b62-4162-bfeb-24da441f5edc","Type":"ContainerStarted","Data":"43d9889174aba0ffe8a888cdcff202c8c53931f24201509e0dbd58a9b93b69cf"} Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.372963 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.374949 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b" event={"ID":"e955b059-294d-40ab-b4af-6bbf7c5bb2e6","Type":"ContainerStarted","Data":"bd6d10803e27cdda8705705a806da8f1e4af3ff4b17dc22b4eb9233533c35529"} Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.375091 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.379010 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-kwb9r" event={"ID":"5278900b-7407-46c5-b420-c5569e508132","Type":"ContainerStarted","Data":"2fbef4b120b60bcf8e06bd1cfe728a0696190b4e65c83b4aef4a454f61d2edd9"} Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.381086 4772 generic.go:334] "Generic (PLEG): container finished" podID="dac39922-73a2-4092-b8f3-eaf6bddb58c1" 
containerID="45619c6296a6d8a93c11a152c13bcd3593d690387d9515021c7248ce6c29d99b" exitCode=0 Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.381158 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qm4xd" event={"ID":"dac39922-73a2-4092-b8f3-eaf6bddb58c1","Type":"ContainerDied","Data":"45619c6296a6d8a93c11a152c13bcd3593d690387d9515021c7248ce6c29d99b"} Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.383224 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8" event={"ID":"c29a1c46-5112-4d85-8f8f-b494575bd428","Type":"ContainerStarted","Data":"a8db435bafbec5ec90d3ffbe363bfc345472933a084337f48afb54362ad579eb"} Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.383384 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.390751 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r" event={"ID":"9d009ed5-21d1-4f1c-b1ec-bef39cf8a265","Type":"ContainerStarted","Data":"e59c7675baa0539221a10225eaef36e2cd6f5d23962acd9b29ed2ebc1f84b5af"} Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.391691 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r" Nov 28 11:21:52 crc kubenswrapper[4772]: E1128 11:21:52.409701 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7\\\"\"" pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" podUID="90486ac7-ac7e-418a-9f2a-5bf934e996ca" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.439850 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2" podStartSLOduration=2.695604765 podStartE2EDuration="43.439832926s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.21750865 +0000 UTC m=+869.540751877" lastFinishedPulling="2025-11-28 11:21:51.961736811 +0000 UTC m=+910.284980038" observedRunningTime="2025-11-28 11:21:52.438556813 +0000 UTC m=+910.761800040" watchObservedRunningTime="2025-11-28 11:21:52.439832926 +0000 UTC m=+910.763076153" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.545080 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-955677c94-kwb9r" podStartSLOduration=28.247198327 podStartE2EDuration="43.545060405s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.041909533 +0000 UTC m=+869.365152760" lastFinishedPulling="2025-11-28 11:21:26.339771611 +0000 UTC m=+884.663014838" observedRunningTime="2025-11-28 11:21:52.536792221 +0000 UTC m=+910.860035458" watchObservedRunningTime="2025-11-28 11:21:52.545060405 +0000 UTC m=+910.868303632" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.616603 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56" 
podStartSLOduration=5.560105125 podStartE2EDuration="43.616576683s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.622639829 +0000 UTC m=+869.945883066" lastFinishedPulling="2025-11-28 11:21:49.679111397 +0000 UTC m=+908.002354624" observedRunningTime="2025-11-28 11:21:52.565124993 +0000 UTC m=+910.888368220" watchObservedRunningTime="2025-11-28 11:21:52.616576683 +0000 UTC m=+910.939819900" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.661190 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b" podStartSLOduration=3.558669686 podStartE2EDuration="43.661172245s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.451948268 +0000 UTC m=+869.775191495" lastFinishedPulling="2025-11-28 11:21:51.554450827 +0000 UTC m=+909.877694054" observedRunningTime="2025-11-28 11:21:52.621917831 +0000 UTC m=+910.945161058" watchObservedRunningTime="2025-11-28 11:21:52.661172245 +0000 UTC m=+910.984415472" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.665510 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz" podStartSLOduration=3.304011366 podStartE2EDuration="43.665501177s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.446290392 +0000 UTC m=+869.769533619" lastFinishedPulling="2025-11-28 11:21:51.807780203 +0000 UTC m=+910.131023430" observedRunningTime="2025-11-28 11:21:52.661931405 +0000 UTC m=+910.985174632" watchObservedRunningTime="2025-11-28 11:21:52.665501177 +0000 UTC m=+910.988744404" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.700658 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r" podStartSLOduration=5.121757797 podStartE2EDuration="42.700638685s" podCreationTimestamp="2025-11-28 11:21:10 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.585439408 +0000 UTC m=+869.908682635" lastFinishedPulling="2025-11-28 11:21:49.164320296 +0000 UTC m=+907.487563523" observedRunningTime="2025-11-28 11:21:52.692796402 +0000 UTC m=+911.016039649" watchObservedRunningTime="2025-11-28 11:21:52.700638685 +0000 UTC m=+911.023881912" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.720035 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8" podStartSLOduration=3.388357595 podStartE2EDuration="43.720017586s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.354434258 +0000 UTC m=+869.677677485" lastFinishedPulling="2025-11-28 11:21:51.686094249 +0000 UTC m=+910.009337476" observedRunningTime="2025-11-28 11:21:52.716503275 +0000 UTC m=+911.039746512" watchObservedRunningTime="2025-11-28 11:21:52.720017586 +0000 UTC m=+911.043260813" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.888744 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kpgw8"] Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.890439 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:21:52 crc kubenswrapper[4772]: I1128 11:21:52.898865 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kpgw8"] Nov 28 11:21:53 crc kubenswrapper[4772]: I1128 11:21:53.009054 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-catalog-content\") pod \"certified-operators-kpgw8\" (UID: \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\") " pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:21:53 crc kubenswrapper[4772]: I1128 11:21:53.009151 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5skzp\" (UniqueName: \"kubernetes.io/projected/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-kube-api-access-5skzp\") pod \"certified-operators-kpgw8\" (UID: \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\") " pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:21:53 crc kubenswrapper[4772]: I1128 11:21:53.009190 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-utilities\") pod \"certified-operators-kpgw8\" (UID: \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\") " pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:21:53 crc kubenswrapper[4772]: I1128 11:21:53.111288 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-utilities\") pod \"certified-operators-kpgw8\" (UID: \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\") " pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:21:53 crc kubenswrapper[4772]: I1128 11:21:53.111412 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-catalog-content\") pod \"certified-operators-kpgw8\" (UID: \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\") " pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:21:53 crc kubenswrapper[4772]: I1128 11:21:53.111488 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5skzp\" (UniqueName: \"kubernetes.io/projected/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-kube-api-access-5skzp\") pod \"certified-operators-kpgw8\" (UID: \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\") " pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:21:53 crc kubenswrapper[4772]: I1128 11:21:53.111890 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-utilities\") pod \"certified-operators-kpgw8\" (UID: \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\") " pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:21:53 crc kubenswrapper[4772]: I1128 11:21:53.112152 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-catalog-content\") pod \"certified-operators-kpgw8\" (UID: \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\") " pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:21:53 crc kubenswrapper[4772]: I1128 11:21:53.129843 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5skzp\" (UniqueName: \"kubernetes.io/projected/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-kube-api-access-5skzp\") pod \"certified-operators-kpgw8\" (UID: \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\") " pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:21:53 crc kubenswrapper[4772]: I1128 11:21:53.210978 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:21:53 crc kubenswrapper[4772]: I1128 11:21:53.402374 4772 generic.go:334] "Generic (PLEG): container finished" podID="dac39922-73a2-4092-b8f3-eaf6bddb58c1" containerID="92383bb0d757027ac750bd62a0cb1ef0053305636c781b58c765743d348e2eb3" exitCode=0 Nov 28 11:21:53 crc kubenswrapper[4772]: I1128 11:21:53.402468 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qm4xd" event={"ID":"dac39922-73a2-4092-b8f3-eaf6bddb58c1","Type":"ContainerDied","Data":"92383bb0d757027ac750bd62a0cb1ef0053305636c781b58c765743d348e2eb3"} Nov 28 11:21:53 crc kubenswrapper[4772]: I1128 11:21:53.572748 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kpgw8"] Nov 28 11:21:54 crc kubenswrapper[4772]: I1128 11:21:54.410330 4772 generic.go:334] "Generic (PLEG): container finished" podID="655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" containerID="14f3f0c8458d40e1d6623419c5cef6158374249a8bc8d35828d0ae3dc98cc900" exitCode=0 Nov 28 11:21:54 crc kubenswrapper[4772]: I1128 11:21:54.410406 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpgw8" event={"ID":"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb","Type":"ContainerDied","Data":"14f3f0c8458d40e1d6623419c5cef6158374249a8bc8d35828d0ae3dc98cc900"} Nov 28 11:21:54 crc kubenswrapper[4772]: I1128 11:21:54.410790 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpgw8" event={"ID":"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb","Type":"ContainerStarted","Data":"676655dec8a840a479d032a94ab4ccb105abf99f1a2a292ac87aec4cb0cc86ab"} Nov 28 11:21:54 crc kubenswrapper[4772]: I1128 11:21:54.418447 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qm4xd" event={"ID":"dac39922-73a2-4092-b8f3-eaf6bddb58c1","Type":"ContainerStarted","Data":"04b14e307a2086bd9fc033bb29844134afad6c861602b9b5306c02f66a32dae7"} Nov 28 11:21:54 crc kubenswrapper[4772]: I1128 11:21:54.463838 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qm4xd" podStartSLOduration=4.55223206 podStartE2EDuration="7.463809085s" podCreationTimestamp="2025-11-28 11:21:47 +0000 UTC" firstStartedPulling="2025-11-28 11:21:51.211843774 +0000 UTC m=+909.535087001" lastFinishedPulling="2025-11-28 11:21:54.123420799 +0000 UTC m=+912.446664026" observedRunningTime="2025-11-28 11:21:54.456428244 +0000 UTC m=+912.779671471" watchObservedRunningTime="2025-11-28 11:21:54.463809085 +0000 UTC m=+912.787052322" Nov 28 11:21:55 crc kubenswrapper[4772]: I1128 11:21:55.978494 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs" Nov 28 11:21:56 crc kubenswrapper[4772]: I1128 11:21:56.435619 4772 generic.go:334] "Generic (PLEG): container finished" podID="655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" 
containerID="7e16a94e9e0bcc5abcf2df7bdd08fa6e8ab5ce3d21312cd6cf0f81b0e6e8cc57" exitCode=0 Nov 28 11:21:56 crc kubenswrapper[4772]: I1128 11:21:56.435724 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpgw8" event={"ID":"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb","Type":"ContainerDied","Data":"7e16a94e9e0bcc5abcf2df7bdd08fa6e8ab5ce3d21312cd6cf0f81b0e6e8cc57"} Nov 28 11:21:57 crc kubenswrapper[4772]: I1128 11:21:57.674013 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:57 crc kubenswrapper[4772]: I1128 11:21:57.674279 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:57 crc kubenswrapper[4772]: I1128 11:21:57.731185 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:58 crc kubenswrapper[4772]: I1128 11:21:58.461116 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpgw8" event={"ID":"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb","Type":"ContainerStarted","Data":"90b46d7f4c523cd5183aeb8b5946aa8647e508cc5a6d69bed7d02171f8cf0dee"} Nov 28 11:21:58 crc kubenswrapper[4772]: I1128 11:21:58.486354 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kpgw8" podStartSLOduration=2.654105183 podStartE2EDuration="6.486329247s" podCreationTimestamp="2025-11-28 11:21:52 +0000 UTC" firstStartedPulling="2025-11-28 11:21:54.412483239 +0000 UTC m=+912.735726476" lastFinishedPulling="2025-11-28 11:21:58.244707313 +0000 UTC m=+916.567950540" observedRunningTime="2025-11-28 11:21:58.485912616 +0000 UTC m=+916.809155853" watchObservedRunningTime="2025-11-28 11:21:58.486329247 +0000 UTC m=+916.809572484" Nov 28 11:21:58 crc kubenswrapper[4772]: E1128 11:21:58.996874 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz" podUID="6e95de97-8ad3-493a-a98f-5541e23ca701" Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.069455 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jtxxv"] Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.071121 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.087475 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jtxxv"] Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.207658 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4eefb75-1856-4ff1-92d0-f1c982983912-catalog-content\") pod \"community-operators-jtxxv\" (UID: \"e4eefb75-1856-4ff1-92d0-f1c982983912\") " pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.207723 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4eefb75-1856-4ff1-92d0-f1c982983912-utilities\") pod \"community-operators-jtxxv\" (UID: \"e4eefb75-1856-4ff1-92d0-f1c982983912\") " pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.207818 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz9g2\" (UniqueName: \"kubernetes.io/projected/e4eefb75-1856-4ff1-92d0-f1c982983912-kube-api-access-qz9g2\") pod \"community-operators-jtxxv\" (UID: \"e4eefb75-1856-4ff1-92d0-f1c982983912\") " pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.309627 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz9g2\" (UniqueName: \"kubernetes.io/projected/e4eefb75-1856-4ff1-92d0-f1c982983912-kube-api-access-qz9g2\") pod \"community-operators-jtxxv\" (UID: \"e4eefb75-1856-4ff1-92d0-f1c982983912\") " pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.309771 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4eefb75-1856-4ff1-92d0-f1c982983912-catalog-content\") pod \"community-operators-jtxxv\" (UID: \"e4eefb75-1856-4ff1-92d0-f1c982983912\") " pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.309828 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4eefb75-1856-4ff1-92d0-f1c982983912-utilities\") pod \"community-operators-jtxxv\" (UID: \"e4eefb75-1856-4ff1-92d0-f1c982983912\") " pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.310561 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4eefb75-1856-4ff1-92d0-f1c982983912-utilities\") pod \"community-operators-jtxxv\" (UID: \"e4eefb75-1856-4ff1-92d0-f1c982983912\") " pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.310571 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4eefb75-1856-4ff1-92d0-f1c982983912-catalog-content\") pod \"community-operators-jtxxv\" (UID: \"e4eefb75-1856-4ff1-92d0-f1c982983912\") " pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.338376 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qz9g2\" (UniqueName: \"kubernetes.io/projected/e4eefb75-1856-4ff1-92d0-f1c982983912-kube-api-access-qz9g2\") pod \"community-operators-jtxxv\" (UID: \"e4eefb75-1856-4ff1-92d0-f1c982983912\") " pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.403911 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.526917 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:21:59 crc kubenswrapper[4772]: I1128 11:21:59.738458 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jtxxv"] Nov 28 11:21:59 crc kubenswrapper[4772]: W1128 11:21:59.745576 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4eefb75_1856_4ff1_92d0_f1c982983912.slice/crio-c6c3509c936d01d50e935f4038d44d111252072a210419431c21e7f2b43ebc4f WatchSource:0}: Error finding container c6c3509c936d01d50e935f4038d44d111252072a210419431c21e7f2b43ebc4f: Status 404 returned error can't find the container with id c6c3509c936d01d50e935f4038d44d111252072a210419431c21e7f2b43ebc4f Nov 28 11:22:00 crc kubenswrapper[4772]: I1128 11:22:00.010443 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qksc2" Nov 28 11:22:00 crc kubenswrapper[4772]: I1128 11:22:00.019835 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-tj9m8" Nov 28 11:22:00 crc kubenswrapper[4772]: I1128 11:22:00.338822 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-l8rvz" Nov 28 11:22:00 crc kubenswrapper[4772]: I1128 11:22:00.429601 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-jpf2b" Nov 28 11:22:00 crc kubenswrapper[4772]: I1128 11:22:00.458910 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-cxg56" Nov 28 11:22:00 crc kubenswrapper[4772]: I1128 11:22:00.482431 4772 generic.go:334] "Generic (PLEG): container finished" podID="e4eefb75-1856-4ff1-92d0-f1c982983912" containerID="1ad9a10670e0298bf09301efd7886e049373c6db2519565de6f1a2d6c4183547" exitCode=0 Nov 28 11:22:00 crc kubenswrapper[4772]: I1128 11:22:00.482902 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtxxv" event={"ID":"e4eefb75-1856-4ff1-92d0-f1c982983912","Type":"ContainerDied","Data":"1ad9a10670e0298bf09301efd7886e049373c6db2519565de6f1a2d6c4183547"} Nov 28 11:22:00 crc kubenswrapper[4772]: I1128 11:22:00.483047 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtxxv" event={"ID":"e4eefb75-1856-4ff1-92d0-f1c982983912","Type":"ContainerStarted","Data":"c6c3509c936d01d50e935f4038d44d111252072a210419431c21e7f2b43ebc4f"} Nov 28 11:22:00 crc kubenswrapper[4772]: I1128 11:22:00.493463 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-956cf" Nov 28 11:22:00 crc kubenswrapper[4772]: I1128 11:22:00.675236 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-cfl4r" Nov 28 11:22:01 crc kubenswrapper[4772]: I1128 11:22:01.456589 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qm4xd"] Nov 28 11:22:01 crc kubenswrapper[4772]: I1128 11:22:01.489693 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qm4xd" podUID="dac39922-73a2-4092-b8f3-eaf6bddb58c1" containerName="registry-server" containerID="cri-o://04b14e307a2086bd9fc033bb29844134afad6c861602b9b5306c02f66a32dae7" gracePeriod=2 Nov 28 11:22:02 crc kubenswrapper[4772]: E1128 11:22:02.002131 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4\\\"\"" pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" podUID="79791884-38fa-4d4e-ace2-cd02b0df26ab" Nov 28 11:22:02 crc kubenswrapper[4772]: I1128 11:22:02.499399 4772 generic.go:334] "Generic (PLEG): container finished" podID="dac39922-73a2-4092-b8f3-eaf6bddb58c1" containerID="04b14e307a2086bd9fc033bb29844134afad6c861602b9b5306c02f66a32dae7" exitCode=0 Nov 28 11:22:02 crc kubenswrapper[4772]: I1128 11:22:02.499444 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qm4xd" event={"ID":"dac39922-73a2-4092-b8f3-eaf6bddb58c1","Type":"ContainerDied","Data":"04b14e307a2086bd9fc033bb29844134afad6c861602b9b5306c02f66a32dae7"} Nov 28 11:22:03 crc kubenswrapper[4772]: I1128 11:22:03.211188 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:22:03 crc kubenswrapper[4772]: I1128 11:22:03.211651 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:22:03 crc kubenswrapper[4772]: I1128 11:22:03.255179 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:22:03 crc kubenswrapper[4772]: I1128 11:22:03.561287 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.285765 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.381922 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac39922-73a2-4092-b8f3-eaf6bddb58c1-utilities\") pod \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\" (UID: \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\") " Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.382070 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b6mg\" (UniqueName: \"kubernetes.io/projected/dac39922-73a2-4092-b8f3-eaf6bddb58c1-kube-api-access-9b6mg\") pod \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\" (UID: \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\") " Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.382179 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac39922-73a2-4092-b8f3-eaf6bddb58c1-catalog-content\") pod \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\" (UID: \"dac39922-73a2-4092-b8f3-eaf6bddb58c1\") " Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.382863 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dac39922-73a2-4092-b8f3-eaf6bddb58c1-utilities" (OuterVolumeSpecName: "utilities") pod "dac39922-73a2-4092-b8f3-eaf6bddb58c1" (UID: "dac39922-73a2-4092-b8f3-eaf6bddb58c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.388553 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac39922-73a2-4092-b8f3-eaf6bddb58c1-kube-api-access-9b6mg" (OuterVolumeSpecName: "kube-api-access-9b6mg") pod "dac39922-73a2-4092-b8f3-eaf6bddb58c1" (UID: "dac39922-73a2-4092-b8f3-eaf6bddb58c1"). InnerVolumeSpecName "kube-api-access-9b6mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.398695 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dac39922-73a2-4092-b8f3-eaf6bddb58c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dac39922-73a2-4092-b8f3-eaf6bddb58c1" (UID: "dac39922-73a2-4092-b8f3-eaf6bddb58c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.483277 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b6mg\" (UniqueName: \"kubernetes.io/projected/dac39922-73a2-4092-b8f3-eaf6bddb58c1-kube-api-access-9b6mg\") on node \"crc\" DevicePath \"\"" Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.483308 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac39922-73a2-4092-b8f3-eaf6bddb58c1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.483318 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac39922-73a2-4092-b8f3-eaf6bddb58c1-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.523451 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qm4xd" Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.523511 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qm4xd" event={"ID":"dac39922-73a2-4092-b8f3-eaf6bddb58c1","Type":"ContainerDied","Data":"9657f40f8878f7fcaa9c448bc31bdac7aaae2fdaab94b51173c9e79c73ad07dc"} Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.523604 4772 scope.go:117] "RemoveContainer" containerID="04b14e307a2086bd9fc033bb29844134afad6c861602b9b5306c02f66a32dae7" Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.540743 4772 generic.go:334] "Generic (PLEG): container finished" podID="e4eefb75-1856-4ff1-92d0-f1c982983912" containerID="b2ece7502d09d8cd45fa3fd17489168b5ed85cdc73f71789b448e2a3ad100c98" exitCode=0 Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.542960 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtxxv" event={"ID":"e4eefb75-1856-4ff1-92d0-f1c982983912","Type":"ContainerDied","Data":"b2ece7502d09d8cd45fa3fd17489168b5ed85cdc73f71789b448e2a3ad100c98"} Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.562141 4772 scope.go:117] "RemoveContainer" containerID="92383bb0d757027ac750bd62a0cb1ef0053305636c781b58c765743d348e2eb3" Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.594494 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qm4xd"] Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.595102 4772 scope.go:117] "RemoveContainer" containerID="45619c6296a6d8a93c11a152c13bcd3593d690387d9515021c7248ce6c29d99b" Nov 28 11:22:04 crc kubenswrapper[4772]: I1128 11:22:04.599334 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qm4xd"] Nov 28 11:22:05 crc kubenswrapper[4772]: I1128 11:22:05.551991 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtxxv" event={"ID":"e4eefb75-1856-4ff1-92d0-f1c982983912","Type":"ContainerStarted","Data":"8599203ec6c6b93e063d67fcbfcc4a53846fdc5fbd03368be4ab1fa3b773ef0a"} Nov 28 11:22:05 crc kubenswrapper[4772]: I1128 11:22:05.574668 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jtxxv" podStartSLOduration=1.9854531039999999 podStartE2EDuration="6.57464682s" podCreationTimestamp="2025-11-28 11:21:59 +0000 UTC" firstStartedPulling="2025-11-28 11:22:00.484493019 +0000 UTC m=+918.807736246" lastFinishedPulling="2025-11-28 11:22:05.073686735 +0000 UTC m=+923.396929962" observedRunningTime="2025-11-28 11:22:05.573482829 +0000 UTC m=+923.896726066" watchObservedRunningTime="2025-11-28 11:22:05.57464682 +0000 UTC m=+923.897890047" Nov 28 11:22:05 crc kubenswrapper[4772]: I1128 11:22:05.859141 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kpgw8"] Nov 28 11:22:05 crc kubenswrapper[4772]: I1128 11:22:05.859387 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kpgw8" podUID="655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" containerName="registry-server" containerID="cri-o://90b46d7f4c523cd5183aeb8b5946aa8647e508cc5a6d69bed7d02171f8cf0dee" gracePeriod=2 Nov 28 11:22:05 crc kubenswrapper[4772]: E1128 11:22:05.995335 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" podUID="7f63617e-c125-40e3-a273-4180f7d8d45c" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.003259 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dac39922-73a2-4092-b8f3-eaf6bddb58c1" path="/var/lib/kubelet/pods/dac39922-73a2-4092-b8f3-eaf6bddb58c1/volumes" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.285243 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.414946 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-utilities\") pod \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\" (UID: \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\") " Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.415027 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-catalog-content\") pod \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\" (UID: \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\") " Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.415127 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5skzp\" (UniqueName: \"kubernetes.io/projected/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-kube-api-access-5skzp\") pod \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\" (UID: \"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb\") " Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.416492 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-utilities" (OuterVolumeSpecName: "utilities") pod "655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" (UID: "655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.434949 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-kube-api-access-5skzp" (OuterVolumeSpecName: "kube-api-access-5skzp") pod "655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" (UID: "655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb"). InnerVolumeSpecName "kube-api-access-5skzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.467012 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" (UID: "655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.516505 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5skzp\" (UniqueName: \"kubernetes.io/projected/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-kube-api-access-5skzp\") on node \"crc\" DevicePath \"\"" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.516552 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.516562 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.564332 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" event={"ID":"90486ac7-ac7e-418a-9f2a-5bf934e996ca","Type":"ContainerStarted","Data":"35f37b22047aac22f53bd0146bc55f42bba07fec1e65909fd406b67ad80512bd"} Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.564729 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.567559 4772 generic.go:334] "Generic (PLEG): container finished" podID="655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" containerID="90b46d7f4c523cd5183aeb8b5946aa8647e508cc5a6d69bed7d02171f8cf0dee" exitCode=0 Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.567600 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpgw8" event={"ID":"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb","Type":"ContainerDied","Data":"90b46d7f4c523cd5183aeb8b5946aa8647e508cc5a6d69bed7d02171f8cf0dee"} Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.567646 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpgw8" event={"ID":"655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb","Type":"ContainerDied","Data":"676655dec8a840a479d032a94ab4ccb105abf99f1a2a292ac87aec4cb0cc86ab"} Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.567670 4772 scope.go:117] "RemoveContainer" containerID="90b46d7f4c523cd5183aeb8b5946aa8647e508cc5a6d69bed7d02171f8cf0dee" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.567720 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kpgw8" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.590420 4772 scope.go:117] "RemoveContainer" containerID="7e16a94e9e0bcc5abcf2df7bdd08fa6e8ab5ce3d21312cd6cf0f81b0e6e8cc57" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.593603 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" podStartSLOduration=19.607337866 podStartE2EDuration="57.593585919s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:27.490681491 +0000 UTC m=+885.813924718" lastFinishedPulling="2025-11-28 11:22:05.476929534 +0000 UTC m=+923.800172771" observedRunningTime="2025-11-28 11:22:06.586600629 +0000 UTC m=+924.909843856" watchObservedRunningTime="2025-11-28 11:22:06.593585919 +0000 UTC m=+924.916829146" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.611966 4772 scope.go:117] "RemoveContainer" containerID="14f3f0c8458d40e1d6623419c5cef6158374249a8bc8d35828d0ae3dc98cc900" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.630429 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kpgw8"] Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.639426 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kpgw8"] Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.644302 4772 scope.go:117] "RemoveContainer" containerID="90b46d7f4c523cd5183aeb8b5946aa8647e508cc5a6d69bed7d02171f8cf0dee" Nov 28 11:22:06 crc kubenswrapper[4772]: E1128 11:22:06.645011 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b46d7f4c523cd5183aeb8b5946aa8647e508cc5a6d69bed7d02171f8cf0dee\": container with ID starting with 90b46d7f4c523cd5183aeb8b5946aa8647e508cc5a6d69bed7d02171f8cf0dee not found: ID does not exist" containerID="90b46d7f4c523cd5183aeb8b5946aa8647e508cc5a6d69bed7d02171f8cf0dee" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.645071 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b46d7f4c523cd5183aeb8b5946aa8647e508cc5a6d69bed7d02171f8cf0dee"} err="failed to get container status \"90b46d7f4c523cd5183aeb8b5946aa8647e508cc5a6d69bed7d02171f8cf0dee\": rpc error: code = NotFound desc = could not find container \"90b46d7f4c523cd5183aeb8b5946aa8647e508cc5a6d69bed7d02171f8cf0dee\": container with ID starting with 90b46d7f4c523cd5183aeb8b5946aa8647e508cc5a6d69bed7d02171f8cf0dee not found: ID does not exist" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.645117 4772 scope.go:117] "RemoveContainer" containerID="7e16a94e9e0bcc5abcf2df7bdd08fa6e8ab5ce3d21312cd6cf0f81b0e6e8cc57" Nov 28 11:22:06 crc kubenswrapper[4772]: E1128 11:22:06.645852 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e16a94e9e0bcc5abcf2df7bdd08fa6e8ab5ce3d21312cd6cf0f81b0e6e8cc57\": container with ID starting with 7e16a94e9e0bcc5abcf2df7bdd08fa6e8ab5ce3d21312cd6cf0f81b0e6e8cc57 not found: ID does not exist" containerID="7e16a94e9e0bcc5abcf2df7bdd08fa6e8ab5ce3d21312cd6cf0f81b0e6e8cc57" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.645913 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e16a94e9e0bcc5abcf2df7bdd08fa6e8ab5ce3d21312cd6cf0f81b0e6e8cc57"} 
err="failed to get container status \"7e16a94e9e0bcc5abcf2df7bdd08fa6e8ab5ce3d21312cd6cf0f81b0e6e8cc57\": rpc error: code = NotFound desc = could not find container \"7e16a94e9e0bcc5abcf2df7bdd08fa6e8ab5ce3d21312cd6cf0f81b0e6e8cc57\": container with ID starting with 7e16a94e9e0bcc5abcf2df7bdd08fa6e8ab5ce3d21312cd6cf0f81b0e6e8cc57 not found: ID does not exist" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.645969 4772 scope.go:117] "RemoveContainer" containerID="14f3f0c8458d40e1d6623419c5cef6158374249a8bc8d35828d0ae3dc98cc900" Nov 28 11:22:06 crc kubenswrapper[4772]: E1128 11:22:06.646324 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14f3f0c8458d40e1d6623419c5cef6158374249a8bc8d35828d0ae3dc98cc900\": container with ID starting with 14f3f0c8458d40e1d6623419c5cef6158374249a8bc8d35828d0ae3dc98cc900 not found: ID does not exist" containerID="14f3f0c8458d40e1d6623419c5cef6158374249a8bc8d35828d0ae3dc98cc900" Nov 28 11:22:06 crc kubenswrapper[4772]: I1128 11:22:06.646376 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14f3f0c8458d40e1d6623419c5cef6158374249a8bc8d35828d0ae3dc98cc900"} err="failed to get container status \"14f3f0c8458d40e1d6623419c5cef6158374249a8bc8d35828d0ae3dc98cc900\": rpc error: code = NotFound desc = could not find container \"14f3f0c8458d40e1d6623419c5cef6158374249a8bc8d35828d0ae3dc98cc900\": container with ID starting with 14f3f0c8458d40e1d6623419c5cef6158374249a8bc8d35828d0ae3dc98cc900 not found: ID does not exist" Nov 28 11:22:06 crc kubenswrapper[4772]: E1128 11:22:06.996562 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" podUID="2ce66d6e-19b8-41e7-890d-f17f4be5a920" Nov 28 11:22:08 crc kubenswrapper[4772]: I1128 11:22:08.039890 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" path="/var/lib/kubelet/pods/655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb/volumes" Nov 28 11:22:09 crc kubenswrapper[4772]: I1128 11:22:09.405924 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:22:09 crc kubenswrapper[4772]: I1128 11:22:09.405980 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:22:09 crc kubenswrapper[4772]: I1128 11:22:09.445844 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:22:15 crc kubenswrapper[4772]: I1128 11:22:15.655223 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-c9nnk" Nov 28 11:22:16 crc kubenswrapper[4772]: I1128 11:22:16.657816 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz" event={"ID":"6e95de97-8ad3-493a-a98f-5541e23ca701","Type":"ContainerStarted","Data":"a7fb8587756e9a21a0f31a91ec100009d8814717f236e1ecce522ea1a9e9434e"} Nov 28 11:22:16 crc kubenswrapper[4772]: I1128 11:22:16.681729 4772 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ldrjz" podStartSLOduration=2.347643813 podStartE2EDuration="1m6.681701524s" podCreationTimestamp="2025-11-28 11:21:10 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.727781566 +0000 UTC m=+870.051024793" lastFinishedPulling="2025-11-28 11:22:16.061839277 +0000 UTC m=+934.385082504" observedRunningTime="2025-11-28 11:22:16.675441633 +0000 UTC m=+934.998684900" watchObservedRunningTime="2025-11-28 11:22:16.681701524 +0000 UTC m=+935.004944751" Nov 28 11:22:18 crc kubenswrapper[4772]: I1128 11:22:18.677615 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" event={"ID":"79791884-38fa-4d4e-ace2-cd02b0df26ab","Type":"ContainerStarted","Data":"80f03e3fee328278b0e1d4e2db266dc306708be59f13c87901fa54b9a8ed16fc"} Nov 28 11:22:18 crc kubenswrapper[4772]: I1128 11:22:18.677993 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" Nov 28 11:22:18 crc kubenswrapper[4772]: I1128 11:22:18.680395 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" event={"ID":"7f63617e-c125-40e3-a273-4180f7d8d45c","Type":"ContainerStarted","Data":"7aa05444ba2cfdd92eaffd395da517cdb01d2acac5cf9349aeb455ac6a8fe9d5"} Nov 28 11:22:18 crc kubenswrapper[4772]: I1128 11:22:18.680551 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" Nov 28 11:22:18 crc kubenswrapper[4772]: I1128 11:22:18.696034 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" podStartSLOduration=3.706049404 podStartE2EDuration="1m9.696019904s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.606923323 +0000 UTC m=+869.930166550" lastFinishedPulling="2025-11-28 11:22:17.596893783 +0000 UTC m=+935.920137050" observedRunningTime="2025-11-28 11:22:18.691912418 +0000 UTC m=+937.015155645" watchObservedRunningTime="2025-11-28 11:22:18.696019904 +0000 UTC m=+937.019263131" Nov 28 11:22:18 crc kubenswrapper[4772]: I1128 11:22:18.710660 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" podStartSLOduration=3.738245996 podStartE2EDuration="1m9.710644802s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.625463072 +0000 UTC m=+869.948706289" lastFinishedPulling="2025-11-28 11:22:17.597861828 +0000 UTC m=+935.921105095" observedRunningTime="2025-11-28 11:22:18.706201257 +0000 UTC m=+937.029444524" watchObservedRunningTime="2025-11-28 11:22:18.710644802 +0000 UTC m=+937.033888029" Nov 28 11:22:19 crc kubenswrapper[4772]: I1128 11:22:19.473390 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:22:19 crc kubenswrapper[4772]: I1128 11:22:19.544329 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jtxxv"] Nov 28 11:22:19 crc kubenswrapper[4772]: I1128 11:22:19.687444 4772 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-jtxxv" podUID="e4eefb75-1856-4ff1-92d0-f1c982983912" containerName="registry-server" containerID="cri-o://8599203ec6c6b93e063d67fcbfcc4a53846fdc5fbd03368be4ab1fa3b773ef0a" gracePeriod=2 Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.129175 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.239755 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4eefb75-1856-4ff1-92d0-f1c982983912-catalog-content\") pod \"e4eefb75-1856-4ff1-92d0-f1c982983912\" (UID: \"e4eefb75-1856-4ff1-92d0-f1c982983912\") " Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.239855 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz9g2\" (UniqueName: \"kubernetes.io/projected/e4eefb75-1856-4ff1-92d0-f1c982983912-kube-api-access-qz9g2\") pod \"e4eefb75-1856-4ff1-92d0-f1c982983912\" (UID: \"e4eefb75-1856-4ff1-92d0-f1c982983912\") " Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.239914 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4eefb75-1856-4ff1-92d0-f1c982983912-utilities\") pod \"e4eefb75-1856-4ff1-92d0-f1c982983912\" (UID: \"e4eefb75-1856-4ff1-92d0-f1c982983912\") " Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.241488 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4eefb75-1856-4ff1-92d0-f1c982983912-utilities" (OuterVolumeSpecName: "utilities") pod "e4eefb75-1856-4ff1-92d0-f1c982983912" (UID: "e4eefb75-1856-4ff1-92d0-f1c982983912"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.250767 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4eefb75-1856-4ff1-92d0-f1c982983912-kube-api-access-qz9g2" (OuterVolumeSpecName: "kube-api-access-qz9g2") pod "e4eefb75-1856-4ff1-92d0-f1c982983912" (UID: "e4eefb75-1856-4ff1-92d0-f1c982983912"). InnerVolumeSpecName "kube-api-access-qz9g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.325765 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4eefb75-1856-4ff1-92d0-f1c982983912-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4eefb75-1856-4ff1-92d0-f1c982983912" (UID: "e4eefb75-1856-4ff1-92d0-f1c982983912"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.341667 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4eefb75-1856-4ff1-92d0-f1c982983912-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.342144 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz9g2\" (UniqueName: \"kubernetes.io/projected/e4eefb75-1856-4ff1-92d0-f1c982983912-kube-api-access-qz9g2\") on node \"crc\" DevicePath \"\"" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.342465 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4eefb75-1856-4ff1-92d0-f1c982983912-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.696046 4772 generic.go:334] "Generic (PLEG): container finished" podID="e4eefb75-1856-4ff1-92d0-f1c982983912" containerID="8599203ec6c6b93e063d67fcbfcc4a53846fdc5fbd03368be4ab1fa3b773ef0a" exitCode=0 Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.696436 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtxxv" event={"ID":"e4eefb75-1856-4ff1-92d0-f1c982983912","Type":"ContainerDied","Data":"8599203ec6c6b93e063d67fcbfcc4a53846fdc5fbd03368be4ab1fa3b773ef0a"} Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.696468 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtxxv" event={"ID":"e4eefb75-1856-4ff1-92d0-f1c982983912","Type":"ContainerDied","Data":"c6c3509c936d01d50e935f4038d44d111252072a210419431c21e7f2b43ebc4f"} Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.696489 4772 scope.go:117] "RemoveContainer" containerID="8599203ec6c6b93e063d67fcbfcc4a53846fdc5fbd03368be4ab1fa3b773ef0a" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.696632 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jtxxv" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.719045 4772 scope.go:117] "RemoveContainer" containerID="b2ece7502d09d8cd45fa3fd17489168b5ed85cdc73f71789b448e2a3ad100c98" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.748209 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jtxxv"] Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.750691 4772 scope.go:117] "RemoveContainer" containerID="1ad9a10670e0298bf09301efd7886e049373c6db2519565de6f1a2d6c4183547" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.757504 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jtxxv"] Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.769216 4772 scope.go:117] "RemoveContainer" containerID="8599203ec6c6b93e063d67fcbfcc4a53846fdc5fbd03368be4ab1fa3b773ef0a" Nov 28 11:22:20 crc kubenswrapper[4772]: E1128 11:22:20.769754 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8599203ec6c6b93e063d67fcbfcc4a53846fdc5fbd03368be4ab1fa3b773ef0a\": container with ID starting with 8599203ec6c6b93e063d67fcbfcc4a53846fdc5fbd03368be4ab1fa3b773ef0a not found: ID does not exist" containerID="8599203ec6c6b93e063d67fcbfcc4a53846fdc5fbd03368be4ab1fa3b773ef0a" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.769810 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8599203ec6c6b93e063d67fcbfcc4a53846fdc5fbd03368be4ab1fa3b773ef0a"} err="failed to get container status \"8599203ec6c6b93e063d67fcbfcc4a53846fdc5fbd03368be4ab1fa3b773ef0a\": rpc error: code = NotFound desc = could not find container \"8599203ec6c6b93e063d67fcbfcc4a53846fdc5fbd03368be4ab1fa3b773ef0a\": container with ID starting with 8599203ec6c6b93e063d67fcbfcc4a53846fdc5fbd03368be4ab1fa3b773ef0a not found: ID does not exist" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.769847 4772 scope.go:117] "RemoveContainer" containerID="b2ece7502d09d8cd45fa3fd17489168b5ed85cdc73f71789b448e2a3ad100c98" Nov 28 11:22:20 crc kubenswrapper[4772]: E1128 11:22:20.770147 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2ece7502d09d8cd45fa3fd17489168b5ed85cdc73f71789b448e2a3ad100c98\": container with ID starting with b2ece7502d09d8cd45fa3fd17489168b5ed85cdc73f71789b448e2a3ad100c98 not found: ID does not exist" containerID="b2ece7502d09d8cd45fa3fd17489168b5ed85cdc73f71789b448e2a3ad100c98" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.770199 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ece7502d09d8cd45fa3fd17489168b5ed85cdc73f71789b448e2a3ad100c98"} err="failed to get container status \"b2ece7502d09d8cd45fa3fd17489168b5ed85cdc73f71789b448e2a3ad100c98\": rpc error: code = NotFound desc = could not find container \"b2ece7502d09d8cd45fa3fd17489168b5ed85cdc73f71789b448e2a3ad100c98\": container with ID starting with b2ece7502d09d8cd45fa3fd17489168b5ed85cdc73f71789b448e2a3ad100c98 not found: ID does not exist" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.770233 4772 scope.go:117] "RemoveContainer" containerID="1ad9a10670e0298bf09301efd7886e049373c6db2519565de6f1a2d6c4183547" Nov 28 11:22:20 crc kubenswrapper[4772]: E1128 11:22:20.770942 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1ad9a10670e0298bf09301efd7886e049373c6db2519565de6f1a2d6c4183547\": container with ID starting with 1ad9a10670e0298bf09301efd7886e049373c6db2519565de6f1a2d6c4183547 not found: ID does not exist" containerID="1ad9a10670e0298bf09301efd7886e049373c6db2519565de6f1a2d6c4183547" Nov 28 11:22:20 crc kubenswrapper[4772]: I1128 11:22:20.770991 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ad9a10670e0298bf09301efd7886e049373c6db2519565de6f1a2d6c4183547"} err="failed to get container status \"1ad9a10670e0298bf09301efd7886e049373c6db2519565de6f1a2d6c4183547\": rpc error: code = NotFound desc = could not find container \"1ad9a10670e0298bf09301efd7886e049373c6db2519565de6f1a2d6c4183547\": container with ID starting with 1ad9a10670e0298bf09301efd7886e049373c6db2519565de6f1a2d6c4183547 not found: ID does not exist" Nov 28 11:22:22 crc kubenswrapper[4772]: I1128 11:22:22.011786 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4eefb75-1856-4ff1-92d0-f1c982983912" path="/var/lib/kubelet/pods/e4eefb75-1856-4ff1-92d0-f1c982983912/volumes" Nov 28 11:22:22 crc kubenswrapper[4772]: I1128 11:22:22.726094 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" event={"ID":"2ce66d6e-19b8-41e7-890d-f17f4be5a920","Type":"ContainerStarted","Data":"cc717767b541be4f4ed1d072115dab6a3d63c9767ebe37dec8c1adc2ec340ba5"} Nov 28 11:22:22 crc kubenswrapper[4772]: I1128 11:22:22.726489 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" Nov 28 11:22:22 crc kubenswrapper[4772]: I1128 11:22:22.755071 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" podStartSLOduration=3.7669092280000003 podStartE2EDuration="1m13.755035169s" podCreationTimestamp="2025-11-28 11:21:09 +0000 UTC" firstStartedPulling="2025-11-28 11:21:11.619294863 +0000 UTC m=+869.942538090" lastFinishedPulling="2025-11-28 11:22:21.607420794 +0000 UTC m=+939.930664031" observedRunningTime="2025-11-28 11:22:22.746993171 +0000 UTC m=+941.070236398" watchObservedRunningTime="2025-11-28 11:22:22.755035169 +0000 UTC m=+941.078278426" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.409290 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8t8hs"] Nov 28 11:22:27 crc kubenswrapper[4772]: E1128 11:22:27.410237 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4eefb75-1856-4ff1-92d0-f1c982983912" containerName="extract-utilities" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.410254 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4eefb75-1856-4ff1-92d0-f1c982983912" containerName="extract-utilities" Nov 28 11:22:27 crc kubenswrapper[4772]: E1128 11:22:27.410295 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" containerName="extract-content" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.410301 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" containerName="extract-content" Nov 28 11:22:27 crc kubenswrapper[4772]: E1128 11:22:27.410313 4772 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e4eefb75-1856-4ff1-92d0-f1c982983912" containerName="extract-content" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.410321 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4eefb75-1856-4ff1-92d0-f1c982983912" containerName="extract-content" Nov 28 11:22:27 crc kubenswrapper[4772]: E1128 11:22:27.410334 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac39922-73a2-4092-b8f3-eaf6bddb58c1" containerName="extract-utilities" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.410341 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac39922-73a2-4092-b8f3-eaf6bddb58c1" containerName="extract-utilities" Nov 28 11:22:27 crc kubenswrapper[4772]: E1128 11:22:27.410386 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac39922-73a2-4092-b8f3-eaf6bddb58c1" containerName="registry-server" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.410393 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac39922-73a2-4092-b8f3-eaf6bddb58c1" containerName="registry-server" Nov 28 11:22:27 crc kubenswrapper[4772]: E1128 11:22:27.410409 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" containerName="registry-server" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.410416 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" containerName="registry-server" Nov 28 11:22:27 crc kubenswrapper[4772]: E1128 11:22:27.410437 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4eefb75-1856-4ff1-92d0-f1c982983912" containerName="registry-server" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.410443 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4eefb75-1856-4ff1-92d0-f1c982983912" containerName="registry-server" Nov 28 11:22:27 crc kubenswrapper[4772]: E1128 11:22:27.410453 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" containerName="extract-utilities" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.410459 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" containerName="extract-utilities" Nov 28 11:22:27 crc kubenswrapper[4772]: E1128 11:22:27.410472 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac39922-73a2-4092-b8f3-eaf6bddb58c1" containerName="extract-content" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.410478 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac39922-73a2-4092-b8f3-eaf6bddb58c1" containerName="extract-content" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.410656 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4eefb75-1856-4ff1-92d0-f1c982983912" containerName="registry-server" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.410672 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac39922-73a2-4092-b8f3-eaf6bddb58c1" containerName="registry-server" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.410680 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="655ab5a3-cb75-44dc-b2fc-6e9c1b7294eb" containerName="registry-server" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.411780 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.431150 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8t8hs"] Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.483038 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f955c880-ba81-42fa-8402-3a9901957184-utilities\") pod \"redhat-operators-8t8hs\" (UID: \"f955c880-ba81-42fa-8402-3a9901957184\") " pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.483124 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f955c880-ba81-42fa-8402-3a9901957184-catalog-content\") pod \"redhat-operators-8t8hs\" (UID: \"f955c880-ba81-42fa-8402-3a9901957184\") " pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.483220 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nqf5\" (UniqueName: \"kubernetes.io/projected/f955c880-ba81-42fa-8402-3a9901957184-kube-api-access-8nqf5\") pod \"redhat-operators-8t8hs\" (UID: \"f955c880-ba81-42fa-8402-3a9901957184\") " pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.584705 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nqf5\" (UniqueName: \"kubernetes.io/projected/f955c880-ba81-42fa-8402-3a9901957184-kube-api-access-8nqf5\") pod \"redhat-operators-8t8hs\" (UID: \"f955c880-ba81-42fa-8402-3a9901957184\") " pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.584786 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f955c880-ba81-42fa-8402-3a9901957184-utilities\") pod \"redhat-operators-8t8hs\" (UID: \"f955c880-ba81-42fa-8402-3a9901957184\") " pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.584851 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f955c880-ba81-42fa-8402-3a9901957184-catalog-content\") pod \"redhat-operators-8t8hs\" (UID: \"f955c880-ba81-42fa-8402-3a9901957184\") " pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.585672 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f955c880-ba81-42fa-8402-3a9901957184-catalog-content\") pod \"redhat-operators-8t8hs\" (UID: \"f955c880-ba81-42fa-8402-3a9901957184\") " pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.586022 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f955c880-ba81-42fa-8402-3a9901957184-utilities\") pod \"redhat-operators-8t8hs\" (UID: \"f955c880-ba81-42fa-8402-3a9901957184\") " pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.612394 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8nqf5\" (UniqueName: \"kubernetes.io/projected/f955c880-ba81-42fa-8402-3a9901957184-kube-api-access-8nqf5\") pod \"redhat-operators-8t8hs\" (UID: \"f955c880-ba81-42fa-8402-3a9901957184\") " pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:27 crc kubenswrapper[4772]: I1128 11:22:27.737745 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:28 crc kubenswrapper[4772]: I1128 11:22:28.258557 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8t8hs"] Nov 28 11:22:28 crc kubenswrapper[4772]: I1128 11:22:28.807070 4772 generic.go:334] "Generic (PLEG): container finished" podID="f955c880-ba81-42fa-8402-3a9901957184" containerID="fe2786fb89df5b8ccbe3f3f965d9d3c8ab03c034e932830f9912d4bd41d7df1c" exitCode=0 Nov 28 11:22:28 crc kubenswrapper[4772]: I1128 11:22:28.807115 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t8hs" event={"ID":"f955c880-ba81-42fa-8402-3a9901957184","Type":"ContainerDied","Data":"fe2786fb89df5b8ccbe3f3f965d9d3c8ab03c034e932830f9912d4bd41d7df1c"} Nov 28 11:22:28 crc kubenswrapper[4772]: I1128 11:22:28.807177 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t8hs" event={"ID":"f955c880-ba81-42fa-8402-3a9901957184","Type":"ContainerStarted","Data":"5167c36a0daac1f2cdd14a2411c5dec1054dd99cbc0712bf1808fc3660a56fab"} Nov 28 11:22:29 crc kubenswrapper[4772]: I1128 11:22:29.818563 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t8hs" event={"ID":"f955c880-ba81-42fa-8402-3a9901957184","Type":"ContainerStarted","Data":"7ba1f49299474e8bd8b8c65a9c81f1bd55b46da6f31c6f5ba5d3aa53ec6165b7"} Nov 28 11:22:30 crc kubenswrapper[4772]: I1128 11:22:30.361688 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-k7z7p" Nov 28 11:22:30 crc kubenswrapper[4772]: I1128 11:22:30.418330 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-2mc7h" Nov 28 11:22:30 crc kubenswrapper[4772]: I1128 11:22:30.448342 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d77b94747-pqx6v" Nov 28 11:22:30 crc kubenswrapper[4772]: I1128 11:22:30.833430 4772 generic.go:334] "Generic (PLEG): container finished" podID="f955c880-ba81-42fa-8402-3a9901957184" containerID="7ba1f49299474e8bd8b8c65a9c81f1bd55b46da6f31c6f5ba5d3aa53ec6165b7" exitCode=0 Nov 28 11:22:30 crc kubenswrapper[4772]: I1128 11:22:30.833494 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t8hs" event={"ID":"f955c880-ba81-42fa-8402-3a9901957184","Type":"ContainerDied","Data":"7ba1f49299474e8bd8b8c65a9c81f1bd55b46da6f31c6f5ba5d3aa53ec6165b7"} Nov 28 11:22:31 crc kubenswrapper[4772]: I1128 11:22:31.844977 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t8hs" event={"ID":"f955c880-ba81-42fa-8402-3a9901957184","Type":"ContainerStarted","Data":"39b0ec019bb13f98f85b62d91a2368e3f4e32dbeadc37e328a9ec350708979f7"} Nov 28 11:22:31 crc kubenswrapper[4772]: I1128 11:22:31.893989 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-8t8hs" podStartSLOduration=2.371883828 podStartE2EDuration="4.89397056s" podCreationTimestamp="2025-11-28 11:22:27 +0000 UTC" firstStartedPulling="2025-11-28 11:22:28.809842765 +0000 UTC m=+947.133085992" lastFinishedPulling="2025-11-28 11:22:31.331929477 +0000 UTC m=+949.655172724" observedRunningTime="2025-11-28 11:22:31.892058821 +0000 UTC m=+950.215302068" watchObservedRunningTime="2025-11-28 11:22:31.89397056 +0000 UTC m=+950.217213797" Nov 28 11:22:37 crc kubenswrapper[4772]: I1128 11:22:37.738351 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:37 crc kubenswrapper[4772]: I1128 11:22:37.739445 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:37 crc kubenswrapper[4772]: I1128 11:22:37.798079 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:37 crc kubenswrapper[4772]: I1128 11:22:37.955420 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:38 crc kubenswrapper[4772]: I1128 11:22:38.058409 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8t8hs"] Nov 28 11:22:39 crc kubenswrapper[4772]: I1128 11:22:39.905750 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8t8hs" podUID="f955c880-ba81-42fa-8402-3a9901957184" containerName="registry-server" containerID="cri-o://39b0ec019bb13f98f85b62d91a2368e3f4e32dbeadc37e328a9ec350708979f7" gracePeriod=2 Nov 28 11:22:41 crc kubenswrapper[4772]: I1128 11:22:41.039431 4772 generic.go:334] "Generic (PLEG): container finished" podID="f955c880-ba81-42fa-8402-3a9901957184" containerID="39b0ec019bb13f98f85b62d91a2368e3f4e32dbeadc37e328a9ec350708979f7" exitCode=0 Nov 28 11:22:41 crc kubenswrapper[4772]: I1128 11:22:41.039555 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t8hs" event={"ID":"f955c880-ba81-42fa-8402-3a9901957184","Type":"ContainerDied","Data":"39b0ec019bb13f98f85b62d91a2368e3f4e32dbeadc37e328a9ec350708979f7"} Nov 28 11:22:41 crc kubenswrapper[4772]: I1128 11:22:41.100526 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:41 crc kubenswrapper[4772]: I1128 11:22:41.202520 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f955c880-ba81-42fa-8402-3a9901957184-catalog-content\") pod \"f955c880-ba81-42fa-8402-3a9901957184\" (UID: \"f955c880-ba81-42fa-8402-3a9901957184\") " Nov 28 11:22:41 crc kubenswrapper[4772]: I1128 11:22:41.202737 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f955c880-ba81-42fa-8402-3a9901957184-utilities\") pod \"f955c880-ba81-42fa-8402-3a9901957184\" (UID: \"f955c880-ba81-42fa-8402-3a9901957184\") " Nov 28 11:22:41 crc kubenswrapper[4772]: I1128 11:22:41.202784 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nqf5\" (UniqueName: \"kubernetes.io/projected/f955c880-ba81-42fa-8402-3a9901957184-kube-api-access-8nqf5\") pod \"f955c880-ba81-42fa-8402-3a9901957184\" (UID: \"f955c880-ba81-42fa-8402-3a9901957184\") " Nov 28 11:22:41 crc kubenswrapper[4772]: I1128 11:22:41.204027 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f955c880-ba81-42fa-8402-3a9901957184-utilities" (OuterVolumeSpecName: "utilities") pod "f955c880-ba81-42fa-8402-3a9901957184" (UID: "f955c880-ba81-42fa-8402-3a9901957184"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:22:41 crc kubenswrapper[4772]: I1128 11:22:41.209876 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f955c880-ba81-42fa-8402-3a9901957184-kube-api-access-8nqf5" (OuterVolumeSpecName: "kube-api-access-8nqf5") pod "f955c880-ba81-42fa-8402-3a9901957184" (UID: "f955c880-ba81-42fa-8402-3a9901957184"). InnerVolumeSpecName "kube-api-access-8nqf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:22:41 crc kubenswrapper[4772]: I1128 11:22:41.304927 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f955c880-ba81-42fa-8402-3a9901957184-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:22:41 crc kubenswrapper[4772]: I1128 11:22:41.304961 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nqf5\" (UniqueName: \"kubernetes.io/projected/f955c880-ba81-42fa-8402-3a9901957184-kube-api-access-8nqf5\") on node \"crc\" DevicePath \"\"" Nov 28 11:22:41 crc kubenswrapper[4772]: I1128 11:22:41.753151 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f955c880-ba81-42fa-8402-3a9901957184-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f955c880-ba81-42fa-8402-3a9901957184" (UID: "f955c880-ba81-42fa-8402-3a9901957184"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:22:41 crc kubenswrapper[4772]: I1128 11:22:41.812929 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f955c880-ba81-42fa-8402-3a9901957184-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:22:42 crc kubenswrapper[4772]: I1128 11:22:42.049216 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t8hs" event={"ID":"f955c880-ba81-42fa-8402-3a9901957184","Type":"ContainerDied","Data":"5167c36a0daac1f2cdd14a2411c5dec1054dd99cbc0712bf1808fc3660a56fab"} Nov 28 11:22:42 crc kubenswrapper[4772]: I1128 11:22:42.049262 4772 scope.go:117] "RemoveContainer" containerID="39b0ec019bb13f98f85b62d91a2368e3f4e32dbeadc37e328a9ec350708979f7" Nov 28 11:22:42 crc kubenswrapper[4772]: I1128 11:22:42.049393 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8t8hs" Nov 28 11:22:42 crc kubenswrapper[4772]: I1128 11:22:42.078923 4772 scope.go:117] "RemoveContainer" containerID="7ba1f49299474e8bd8b8c65a9c81f1bd55b46da6f31c6f5ba5d3aa53ec6165b7" Nov 28 11:22:42 crc kubenswrapper[4772]: I1128 11:22:42.080919 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8t8hs"] Nov 28 11:22:42 crc kubenswrapper[4772]: I1128 11:22:42.086253 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8t8hs"] Nov 28 11:22:42 crc kubenswrapper[4772]: I1128 11:22:42.108793 4772 scope.go:117] "RemoveContainer" containerID="fe2786fb89df5b8ccbe3f3f965d9d3c8ab03c034e932830f9912d4bd41d7df1c" Nov 28 11:22:44 crc kubenswrapper[4772]: I1128 11:22:44.003904 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f955c880-ba81-42fa-8402-3a9901957184" path="/var/lib/kubelet/pods/f955c880-ba81-42fa-8402-3a9901957184/volumes" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.362239 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h4kmr"] Nov 28 11:22:48 crc kubenswrapper[4772]: E1128 11:22:48.363106 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f955c880-ba81-42fa-8402-3a9901957184" containerName="registry-server" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.363118 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f955c880-ba81-42fa-8402-3a9901957184" containerName="registry-server" Nov 28 11:22:48 crc kubenswrapper[4772]: E1128 11:22:48.363127 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f955c880-ba81-42fa-8402-3a9901957184" containerName="extract-content" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.363133 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f955c880-ba81-42fa-8402-3a9901957184" containerName="extract-content" Nov 28 11:22:48 crc kubenswrapper[4772]: E1128 11:22:48.363151 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f955c880-ba81-42fa-8402-3a9901957184" containerName="extract-utilities" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.363157 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f955c880-ba81-42fa-8402-3a9901957184" containerName="extract-utilities" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.363288 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f955c880-ba81-42fa-8402-3a9901957184" containerName="registry-server" Nov 28 11:22:48 crc 
kubenswrapper[4772]: I1128 11:22:48.363977 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h4kmr" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.366438 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.367265 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jpb8k" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.367430 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.374200 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.377541 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h4kmr"] Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.415277 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc-config\") pod \"dnsmasq-dns-675f4bcbfc-h4kmr\" (UID: \"9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h4kmr" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.416067 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjcvg\" (UniqueName: \"kubernetes.io/projected/9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc-kube-api-access-wjcvg\") pod \"dnsmasq-dns-675f4bcbfc-h4kmr\" (UID: \"9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h4kmr" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.426822 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-q7hmz"] Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.436007 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.438209 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.454869 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-q7hmz"] Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.518325 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-config\") pod \"dnsmasq-dns-78dd6ddcc-q7hmz\" (UID: \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.518406 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc-config\") pod \"dnsmasq-dns-675f4bcbfc-h4kmr\" (UID: \"9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h4kmr" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.518442 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjcvg\" (UniqueName: \"kubernetes.io/projected/9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc-kube-api-access-wjcvg\") pod \"dnsmasq-dns-675f4bcbfc-h4kmr\" (UID: \"9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h4kmr" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.518486 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7b5g\" (UniqueName: \"kubernetes.io/projected/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-kube-api-access-z7b5g\") pod \"dnsmasq-dns-78dd6ddcc-q7hmz\" (UID: \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.518505 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-q7hmz\" (UID: \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.519313 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc-config\") pod \"dnsmasq-dns-675f4bcbfc-h4kmr\" (UID: \"9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h4kmr" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.550615 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjcvg\" (UniqueName: \"kubernetes.io/projected/9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc-kube-api-access-wjcvg\") pod \"dnsmasq-dns-675f4bcbfc-h4kmr\" (UID: \"9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h4kmr" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.619574 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7b5g\" (UniqueName: \"kubernetes.io/projected/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-kube-api-access-z7b5g\") pod \"dnsmasq-dns-78dd6ddcc-q7hmz\" (UID: \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" Nov 28 11:22:48 crc 
kubenswrapper[4772]: I1128 11:22:48.619647 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-q7hmz\" (UID: \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.619693 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-config\") pod \"dnsmasq-dns-78dd6ddcc-q7hmz\" (UID: \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.620762 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-config\") pod \"dnsmasq-dns-78dd6ddcc-q7hmz\" (UID: \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.621271 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-q7hmz\" (UID: \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.642411 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7b5g\" (UniqueName: \"kubernetes.io/projected/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-kube-api-access-z7b5g\") pod \"dnsmasq-dns-78dd6ddcc-q7hmz\" (UID: \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.683267 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h4kmr" Nov 28 11:22:48 crc kubenswrapper[4772]: I1128 11:22:48.754697 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.224254 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h4kmr"] Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.276432 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-q7hmz"] Nov 28 11:22:49 crc kubenswrapper[4772]: W1128 11:22:49.277111 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf42ec8c4_f983_4c21_987c_d7f2ec23ff1c.slice/crio-0d6bd8fd120b2852d4c7e475f90d58aa668fcf949d4b62f86900408ba183dfb3 WatchSource:0}: Error finding container 0d6bd8fd120b2852d4c7e475f90d58aa668fcf949d4b62f86900408ba183dfb3: Status 404 returned error can't find the container with id 0d6bd8fd120b2852d4c7e475f90d58aa668fcf949d4b62f86900408ba183dfb3 Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.709426 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h4kmr"] Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.729573 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-nfhhm"] Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.731132 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.754405 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-nfhhm"] Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.870143 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c022176-398c-4a4f-af82-afd5b81847b2-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-nfhhm\" (UID: \"5c022176-398c-4a4f-af82-afd5b81847b2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.870203 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7lq\" (UniqueName: \"kubernetes.io/projected/5c022176-398c-4a4f-af82-afd5b81847b2-kube-api-access-mg7lq\") pod \"dnsmasq-dns-5ccc8479f9-nfhhm\" (UID: \"5c022176-398c-4a4f-af82-afd5b81847b2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.870280 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c022176-398c-4a4f-af82-afd5b81847b2-config\") pod \"dnsmasq-dns-5ccc8479f9-nfhhm\" (UID: \"5c022176-398c-4a4f-af82-afd5b81847b2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.971824 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c022176-398c-4a4f-af82-afd5b81847b2-config\") pod \"dnsmasq-dns-5ccc8479f9-nfhhm\" (UID: \"5c022176-398c-4a4f-af82-afd5b81847b2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.972006 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c022176-398c-4a4f-af82-afd5b81847b2-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-nfhhm\" (UID: \"5c022176-398c-4a4f-af82-afd5b81847b2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.972062 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg7lq\" (UniqueName: \"kubernetes.io/projected/5c022176-398c-4a4f-af82-afd5b81847b2-kube-api-access-mg7lq\") pod \"dnsmasq-dns-5ccc8479f9-nfhhm\" (UID: \"5c022176-398c-4a4f-af82-afd5b81847b2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.973207 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c022176-398c-4a4f-af82-afd5b81847b2-config\") pod \"dnsmasq-dns-5ccc8479f9-nfhhm\" (UID: \"5c022176-398c-4a4f-af82-afd5b81847b2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.973295 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c022176-398c-4a4f-af82-afd5b81847b2-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-nfhhm\" (UID: \"5c022176-398c-4a4f-af82-afd5b81847b2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" Nov 28 11:22:49 crc kubenswrapper[4772]: I1128 11:22:49.993472 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg7lq\" (UniqueName: 
\"kubernetes.io/projected/5c022176-398c-4a4f-af82-afd5b81847b2-kube-api-access-mg7lq\") pod \"dnsmasq-dns-5ccc8479f9-nfhhm\" (UID: \"5c022176-398c-4a4f-af82-afd5b81847b2\") " pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.054473 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.127658 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" event={"ID":"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c","Type":"ContainerStarted","Data":"0d6bd8fd120b2852d4c7e475f90d58aa668fcf949d4b62f86900408ba183dfb3"} Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.129980 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-h4kmr" event={"ID":"9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc","Type":"ContainerStarted","Data":"a95131779233e7cb7cf2cd97bc865e769c2a0676f7ee6671f7a4891e18d936dd"} Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.236570 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-q7hmz"] Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.275878 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mchqv"] Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.279149 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.296349 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mchqv"] Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.379052 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mchqv\" (UID: \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\") " pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.379446 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-config\") pod \"dnsmasq-dns-57d769cc4f-mchqv\" (UID: \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\") " pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.379468 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcpfh\" (UniqueName: \"kubernetes.io/projected/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-kube-api-access-fcpfh\") pod \"dnsmasq-dns-57d769cc4f-mchqv\" (UID: \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\") " pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.481897 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcpfh\" (UniqueName: \"kubernetes.io/projected/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-kube-api-access-fcpfh\") pod \"dnsmasq-dns-57d769cc4f-mchqv\" (UID: \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\") " pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.482016 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mchqv\" (UID: \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\") " pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.482097 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-config\") pod \"dnsmasq-dns-57d769cc4f-mchqv\" (UID: \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\") " pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.484181 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-config\") pod \"dnsmasq-dns-57d769cc4f-mchqv\" (UID: \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\") " pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.484722 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mchqv\" (UID: \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\") " pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.514387 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcpfh\" (UniqueName: \"kubernetes.io/projected/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-kube-api-access-fcpfh\") pod \"dnsmasq-dns-57d769cc4f-mchqv\" (UID: \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\") " pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.629318 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.723239 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-nfhhm"] Nov 28 11:22:50 crc kubenswrapper[4772]: W1128 11:22:50.740555 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c022176_398c_4a4f_af82_afd5b81847b2.slice/crio-0d638e68cefaaea189d6fbe1849fbd0fb609c4d2e51e12844db04ff84e9c7704 WatchSource:0}: Error finding container 0d638e68cefaaea189d6fbe1849fbd0fb609c4d2e51e12844db04ff84e9c7704: Status 404 returned error can't find the container with id 0d638e68cefaaea189d6fbe1849fbd0fb609c4d2e51e12844db04ff84e9c7704 Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.887218 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.888831 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.893915 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.893934 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.893948 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.894384 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.894485 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.894928 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-mcv57" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.896946 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.900849 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.996613 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.996653 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.996675 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52b2f98f-d36f-4798-9903-1f75498cdb5b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.996702 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.996878 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.996908 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95v92\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-kube-api-access-95v92\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.996951 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.996987 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.997003 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.997060 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52b2f98f-d36f-4798-9903-1f75498cdb5b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:50 crc kubenswrapper[4772]: I1128 11:22:50.997137 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.098176 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.098220 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.098242 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95v92\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-kube-api-access-95v92\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.098273 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.098297 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.098312 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.098337 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52b2f98f-d36f-4798-9903-1f75498cdb5b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.098405 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.098435 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.098451 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.098470 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52b2f98f-d36f-4798-9903-1f75498cdb5b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.099206 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.100228 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.100509 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.102240 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.103333 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.103958 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.108007 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.108161 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52b2f98f-d36f-4798-9903-1f75498cdb5b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.121083 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.121529 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52b2f98f-d36f-4798-9903-1f75498cdb5b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.130266 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95v92\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-kube-api-access-95v92\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.161581 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.177456 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" event={"ID":"5c022176-398c-4a4f-af82-afd5b81847b2","Type":"ContainerStarted","Data":"0d638e68cefaaea189d6fbe1849fbd0fb609c4d2e51e12844db04ff84e9c7704"} Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.227655 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.293275 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mchqv"] Nov 28 11:22:51 crc kubenswrapper[4772]: W1128 11:22:51.319130 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ca2b953_45e9_4a19_9dda_8c76d7eeb213.slice/crio-355388bcbce1ba5ca6ec7dc94c4a46a70f65039fa3aab39ecf552ad2742be80c WatchSource:0}: Error finding container 355388bcbce1ba5ca6ec7dc94c4a46a70f65039fa3aab39ecf552ad2742be80c: Status 404 returned error can't find the container with id 355388bcbce1ba5ca6ec7dc94c4a46a70f65039fa3aab39ecf552ad2742be80c Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.416478 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.418087 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.421780 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-2lkn4" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.422006 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.422183 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.422234 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.422326 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.422544 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.422690 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.456647 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.510228 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: 
I1128 11:22:51.510288 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r8n2\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-kube-api-access-8r8n2\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.510346 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.510823 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.510935 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.511006 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-config-data\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.511030 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d684e5b-88f5-4004-a176-22ae480daaa6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.511058 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.511086 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.511145 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d684e5b-88f5-4004-a176-22ae480daaa6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.511171 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.612474 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.612534 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-config-data\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.612552 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d684e5b-88f5-4004-a176-22ae480daaa6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.612572 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.612589 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.612617 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d684e5b-88f5-4004-a176-22ae480daaa6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.612631 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.612681 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.612699 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r8n2\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-kube-api-access-8r8n2\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " 
pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.612724 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.612761 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.612991 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.613602 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-config-data\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.613920 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.613954 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.616325 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.616646 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.621842 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.621931 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4d684e5b-88f5-4004-a176-22ae480daaa6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.622006 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.630437 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d684e5b-88f5-4004-a176-22ae480daaa6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.632118 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r8n2\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-kube-api-access-8r8n2\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.636280 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") " pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.756742 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 11:22:51 crc kubenswrapper[4772]: I1128 11:22:51.830340 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 11:22:52 crc kubenswrapper[4772]: I1128 11:22:52.194158 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52b2f98f-d36f-4798-9903-1f75498cdb5b","Type":"ContainerStarted","Data":"c9bb787459e31b94e90e0d29c567b5042db7e462d37f4ef8565dbf001daba98c"} Nov 28 11:22:52 crc kubenswrapper[4772]: I1128 11:22:52.198539 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" event={"ID":"6ca2b953-45e9-4a19-9dda-8c76d7eeb213","Type":"ContainerStarted","Data":"355388bcbce1ba5ca6ec7dc94c4a46a70f65039fa3aab39ecf552ad2742be80c"} Nov 28 11:22:52 crc kubenswrapper[4772]: I1128 11:22:52.366369 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.138771 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.141456 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.150588 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.150838 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-rx4n8" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.150963 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.157103 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.160163 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.163722 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.230568 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4d684e5b-88f5-4004-a176-22ae480daaa6","Type":"ContainerStarted","Data":"1b094406ebe0ce115e1533d81314e7786d1ca1cfeb129ab2d8ea32bcb3ccda82"} Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.263247 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.263315 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.263351 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-kolla-config\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.263399 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn87s\" (UniqueName: \"kubernetes.io/projected/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-kube-api-access-mn87s\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.263426 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-config-data-default\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.263450 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.263478 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.263510 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.365284 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.365353 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.365404 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-kolla-config\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.365424 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn87s\" (UniqueName: \"kubernetes.io/projected/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-kube-api-access-mn87s\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.365461 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-config-data-default\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.365482 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.365503 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " 
pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.365546 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.370315 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.370759 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.382611 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.383353 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-config-data-default\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.383836 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-kolla-config\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.403376 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.428260 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.435567 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.456648 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn87s\" (UniqueName: 
\"kubernetes.io/projected/ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20-kube-api-access-mn87s\") pod \"openstack-galera-0\" (UID: \"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20\") " pod="openstack/openstack-galera-0" Nov 28 11:22:53 crc kubenswrapper[4772]: I1128 11:22:53.507640 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.137381 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.257515 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20","Type":"ContainerStarted","Data":"b6fc87d553d3ba546f0a4b30fb6bc60011dc20431414ba97aad96cf840ff5301"} Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.611205 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.612859 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.617022 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.617581 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-t2krt" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.627841 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.666985 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.667839 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.689708 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbc92675-93c6-4d66-afb0-d83636cbf853-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.689755 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fbc92675-93c6-4d66-afb0-d83636cbf853-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.689784 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.689814 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fbc92675-93c6-4d66-afb0-d83636cbf853-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.689858 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btflw\" (UniqueName: \"kubernetes.io/projected/fbc92675-93c6-4d66-afb0-d83636cbf853-kube-api-access-btflw\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.689929 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fbc92675-93c6-4d66-afb0-d83636cbf853-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.690410 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbc92675-93c6-4d66-afb0-d83636cbf853-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.690454 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc92675-93c6-4d66-afb0-d83636cbf853-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.788792 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.791459 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.792071 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btflw\" (UniqueName: \"kubernetes.io/projected/fbc92675-93c6-4d66-afb0-d83636cbf853-kube-api-access-btflw\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.792159 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fbc92675-93c6-4d66-afb0-d83636cbf853-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.792192 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbc92675-93c6-4d66-afb0-d83636cbf853-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.792217 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc92675-93c6-4d66-afb0-d83636cbf853-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.792294 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbc92675-93c6-4d66-afb0-d83636cbf853-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.792325 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fbc92675-93c6-4d66-afb0-d83636cbf853-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.792396 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.792440 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fbc92675-93c6-4d66-afb0-d83636cbf853-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.793520 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fbc92675-93c6-4d66-afb0-d83636cbf853-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.798277 
4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.799426 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.799823 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6kml2" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.800704 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc92675-93c6-4d66-afb0-d83636cbf853-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.801008 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbc92675-93c6-4d66-afb0-d83636cbf853-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.801202 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fbc92675-93c6-4d66-afb0-d83636cbf853-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.801595 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.802395 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbc92675-93c6-4d66-afb0-d83636cbf853-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.802587 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.804322 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fbc92675-93c6-4d66-afb0-d83636cbf853-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.827785 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.854604 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btflw\" (UniqueName: \"kubernetes.io/projected/fbc92675-93c6-4d66-afb0-d83636cbf853-kube-api-access-btflw\") pod \"openstack-cell1-galera-0\" (UID: 
\"fbc92675-93c6-4d66-afb0-d83636cbf853\") " pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.981151 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.996712 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e491cd-f369-412c-9b41-77844ff3057d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " pod="openstack/memcached-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.996799 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e491cd-f369-412c-9b41-77844ff3057d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " pod="openstack/memcached-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.996838 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzs9r\" (UniqueName: \"kubernetes.io/projected/f9e491cd-f369-412c-9b41-77844ff3057d-kube-api-access-xzs9r\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " pod="openstack/memcached-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.996933 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9e491cd-f369-412c-9b41-77844ff3057d-config-data\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " pod="openstack/memcached-0" Nov 28 11:22:54 crc kubenswrapper[4772]: I1128 11:22:54.996975 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f9e491cd-f369-412c-9b41-77844ff3057d-kolla-config\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " pod="openstack/memcached-0" Nov 28 11:22:55 crc kubenswrapper[4772]: I1128 11:22:55.100161 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzs9r\" (UniqueName: \"kubernetes.io/projected/f9e491cd-f369-412c-9b41-77844ff3057d-kube-api-access-xzs9r\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " pod="openstack/memcached-0" Nov 28 11:22:55 crc kubenswrapper[4772]: I1128 11:22:55.100237 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9e491cd-f369-412c-9b41-77844ff3057d-config-data\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " pod="openstack/memcached-0" Nov 28 11:22:55 crc kubenswrapper[4772]: I1128 11:22:55.100266 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f9e491cd-f369-412c-9b41-77844ff3057d-kolla-config\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " pod="openstack/memcached-0" Nov 28 11:22:55 crc kubenswrapper[4772]: I1128 11:22:55.100321 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e491cd-f369-412c-9b41-77844ff3057d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " 
pod="openstack/memcached-0" Nov 28 11:22:55 crc kubenswrapper[4772]: I1128 11:22:55.100404 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e491cd-f369-412c-9b41-77844ff3057d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " pod="openstack/memcached-0" Nov 28 11:22:55 crc kubenswrapper[4772]: I1128 11:22:55.101524 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9e491cd-f369-412c-9b41-77844ff3057d-config-data\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " pod="openstack/memcached-0" Nov 28 11:22:55 crc kubenswrapper[4772]: I1128 11:22:55.102269 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f9e491cd-f369-412c-9b41-77844ff3057d-kolla-config\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " pod="openstack/memcached-0" Nov 28 11:22:55 crc kubenswrapper[4772]: I1128 11:22:55.107789 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e491cd-f369-412c-9b41-77844ff3057d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " pod="openstack/memcached-0" Nov 28 11:22:55 crc kubenswrapper[4772]: I1128 11:22:55.114847 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e491cd-f369-412c-9b41-77844ff3057d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " pod="openstack/memcached-0" Nov 28 11:22:55 crc kubenswrapper[4772]: I1128 11:22:55.120167 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzs9r\" (UniqueName: \"kubernetes.io/projected/f9e491cd-f369-412c-9b41-77844ff3057d-kube-api-access-xzs9r\") pod \"memcached-0\" (UID: \"f9e491cd-f369-412c-9b41-77844ff3057d\") " pod="openstack/memcached-0" Nov 28 11:22:55 crc kubenswrapper[4772]: I1128 11:22:55.205688 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 28 11:22:55 crc kubenswrapper[4772]: I1128 11:22:55.569936 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 11:22:55 crc kubenswrapper[4772]: I1128 11:22:55.822925 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 28 11:22:55 crc kubenswrapper[4772]: W1128 11:22:55.863764 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9e491cd_f369_412c_9b41_77844ff3057d.slice/crio-55027c1e189f47e576cb8114f81d259f0a828beb4ee2e0a7d691b6f5132b205d WatchSource:0}: Error finding container 55027c1e189f47e576cb8114f81d259f0a828beb4ee2e0a7d691b6f5132b205d: Status 404 returned error can't find the container with id 55027c1e189f47e576cb8114f81d259f0a828beb4ee2e0a7d691b6f5132b205d Nov 28 11:22:56 crc kubenswrapper[4772]: I1128 11:22:56.311522 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fbc92675-93c6-4d66-afb0-d83636cbf853","Type":"ContainerStarted","Data":"9d9be99908d1e85fbea69bb902fbb3c1d28b9755436f2dd136fce4394cf4349a"} Nov 28 11:22:56 crc kubenswrapper[4772]: I1128 11:22:56.315173 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f9e491cd-f369-412c-9b41-77844ff3057d","Type":"ContainerStarted","Data":"55027c1e189f47e576cb8114f81d259f0a828beb4ee2e0a7d691b6f5132b205d"} Nov 28 11:22:56 crc kubenswrapper[4772]: I1128 11:22:56.467994 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 11:22:56 crc kubenswrapper[4772]: I1128 11:22:56.468960 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 11:22:56 crc kubenswrapper[4772]: I1128 11:22:56.472600 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bjtfl" Nov 28 11:22:56 crc kubenswrapper[4772]: I1128 11:22:56.500903 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 11:22:56 crc kubenswrapper[4772]: I1128 11:22:56.529700 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65qzq\" (UniqueName: \"kubernetes.io/projected/064be676-f5a5-4ae0-9fce-c2103f169de8-kube-api-access-65qzq\") pod \"kube-state-metrics-0\" (UID: \"064be676-f5a5-4ae0-9fce-c2103f169de8\") " pod="openstack/kube-state-metrics-0" Nov 28 11:22:56 crc kubenswrapper[4772]: I1128 11:22:56.631819 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65qzq\" (UniqueName: \"kubernetes.io/projected/064be676-f5a5-4ae0-9fce-c2103f169de8-kube-api-access-65qzq\") pod \"kube-state-metrics-0\" (UID: \"064be676-f5a5-4ae0-9fce-c2103f169de8\") " pod="openstack/kube-state-metrics-0" Nov 28 11:22:56 crc kubenswrapper[4772]: I1128 11:22:56.684260 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65qzq\" (UniqueName: \"kubernetes.io/projected/064be676-f5a5-4ae0-9fce-c2103f169de8-kube-api-access-65qzq\") pod \"kube-state-metrics-0\" (UID: \"064be676-f5a5-4ae0-9fce-c2103f169de8\") " pod="openstack/kube-state-metrics-0" Nov 28 11:22:56 crc kubenswrapper[4772]: I1128 11:22:56.799125 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 28 11:22:57 crc kubenswrapper[4772]: I1128 11:22:57.804440 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.280895 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qzrrh"]
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.282341 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.286880 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-thvdq"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.287026 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.287155 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.302649 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qzrrh"]
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.306106 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-gjbm2"]
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.309067 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.320998 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gjbm2"]
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.331561 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42fee486-89c9-4f0e-9db6-ac695b62a588-var-run-ovn\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.331625 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42fee486-89c9-4f0e-9db6-ac695b62a588-scripts\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.331656 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42fee486-89c9-4f0e-9db6-ac695b62a588-var-log-ovn\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.331690 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sssvl\" (UniqueName: \"kubernetes.io/projected/42fee486-89c9-4f0e-9db6-ac695b62a588-kube-api-access-sssvl\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.331719 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42fee486-89c9-4f0e-9db6-ac695b62a588-var-run\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.331744 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42fee486-89c9-4f0e-9db6-ac695b62a588-combined-ca-bundle\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.331803 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/42fee486-89c9-4f0e-9db6-ac695b62a588-ovn-controller-tls-certs\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.416820 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.428278 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.433032 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.433388 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/42fee486-89c9-4f0e-9db6-ac695b62a588-ovn-controller-tls-certs\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.433454 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/dbdbf695-f81d-431b-8330-6745cdbf9ab1-var-lib\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.433497 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dbdbf695-f81d-431b-8330-6745cdbf9ab1-scripts\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.433528 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdpj2\" (UniqueName: \"kubernetes.io/projected/dbdbf695-f81d-431b-8330-6745cdbf9ab1-kube-api-access-bdpj2\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.331719 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dbdbf695-f81d-431b-8330-6745cdbf9ab1-var-run\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.433595 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42fee486-89c9-4f0e-9db6-ac695b62a588-var-run-ovn\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.433615 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42fee486-89c9-4f0e-9db6-ac695b62a588-scripts\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.433660 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42fee486-89c9-4f0e-9db6-ac695b62a588-var-log-ovn\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.433680 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/dbdbf695-f81d-431b-8330-6745cdbf9ab1-etc-ovs\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.433702 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sssvl\" (UniqueName: \"kubernetes.io/projected/42fee486-89c9-4f0e-9db6-ac695b62a588-kube-api-access-sssvl\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.433756 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42fee486-89c9-4f0e-9db6-ac695b62a588-var-run\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.433774 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/dbdbf695-f81d-431b-8330-6745cdbf9ab1-var-log\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.433807 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42fee486-89c9-4f0e-9db6-ac695b62a588-combined-ca-bundle\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.434555 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42fee486-89c9-4f0e-9db6-ac695b62a588-var-run-ovn\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.435350 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42fee486-89c9-4f0e-9db6-ac695b62a588-var-log-ovn\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.435441 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.435711 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-pgvn5"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.435774 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.435402 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.440925 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42fee486-89c9-4f0e-9db6-ac695b62a588-var-run\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.443873 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.455123 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42fee486-89c9-4f0e-9db6-ac695b62a588-combined-ca-bundle\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.456556 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42fee486-89c9-4f0e-9db6-ac695b62a588-scripts\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.459324 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sssvl\" (UniqueName: \"kubernetes.io/projected/42fee486-89c9-4f0e-9db6-ac695b62a588-kube-api-access-sssvl\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.463154 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/42fee486-89c9-4f0e-9db6-ac695b62a588-ovn-controller-tls-certs\") pod \"ovn-controller-qzrrh\" (UID: \"42fee486-89c9-4f0e-9db6-ac695b62a588\") " pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.535512 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.535591 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.535633 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.535665 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.535698 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/dbdbf695-f81d-431b-8330-6745cdbf9ab1-var-lib\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.535750 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dbdbf695-f81d-431b-8330-6745cdbf9ab1-scripts\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.535775 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdpj2\" (UniqueName: \"kubernetes.io/projected/dbdbf695-f81d-431b-8330-6745cdbf9ab1-kube-api-access-bdpj2\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.535825 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.535853 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dbdbf695-f81d-431b-8330-6745cdbf9ab1-var-run\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.535902 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/dbdbf695-f81d-431b-8330-6745cdbf9ab1-etc-ovs\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.535936 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.535966 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/dbdbf695-f81d-431b-8330-6745cdbf9ab1-var-log\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.535999 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-config\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.536024 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpggm\" (UniqueName: \"kubernetes.io/projected/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-kube-api-access-jpggm\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.536395 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/dbdbf695-f81d-431b-8330-6745cdbf9ab1-var-lib\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.539149 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dbdbf695-f81d-431b-8330-6745cdbf9ab1-scripts\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.539633 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dbdbf695-f81d-431b-8330-6745cdbf9ab1-var-run\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.539763 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/dbdbf695-f81d-431b-8330-6745cdbf9ab1-etc-ovs\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.539854 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/dbdbf695-f81d-431b-8330-6745cdbf9ab1-var-log\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.570419 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdpj2\" (UniqueName: \"kubernetes.io/projected/dbdbf695-f81d-431b-8330-6745cdbf9ab1-kube-api-access-bdpj2\") pod \"ovn-controller-ovs-gjbm2\" (UID: \"dbdbf695-f81d-431b-8330-6745cdbf9ab1\") " pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.609123 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qzrrh"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.638542 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gjbm2"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.638812 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.638916 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.638970 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-config\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.638997 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpggm\" (UniqueName: \"kubernetes.io/projected/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-kube-api-access-jpggm\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.639023 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.639056 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.639086 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.639110 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.640395 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.642108 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-config\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.642388 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.647099 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.649168 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.649845 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.670736 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpggm\" (UniqueName: \"kubernetes.io/projected/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-kube-api-access-jpggm\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.671184 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934fc42a-8a76-4b95-9ad0-5e13fa47d1cb-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.679345 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:00 crc kubenswrapper[4772]: I1128 11:23:00.815999 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.371108 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.378695 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.385687 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.385846 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-hzt8w"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.385940 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.386216 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.387251 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.460831 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.461047 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.461134 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-config\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.461201 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nchh6\" (UniqueName: \"kubernetes.io/projected/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-kube-api-access-nchh6\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.461241 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.461333 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.461647 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.461690 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.563325 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.563387 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.563420 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.563438 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.563459 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-config\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.563483 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nchh6\" (UniqueName: \"kubernetes.io/projected/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-kube-api-access-nchh6\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.563499 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.563528 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.563760 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.564924 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.565143 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-config\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.565336 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.570594 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.572291 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.572308 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.589801 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nchh6\" (UniqueName: \"kubernetes.io/projected/ec35ef2e-e0de-4d42-9ab0-35033d549ac9-kube-api-access-nchh6\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.600547 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ec35ef2e-e0de-4d42-9ab0-35033d549ac9\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:04 crc kubenswrapper[4772]: I1128 11:23:04.748992 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Nov 28 11:23:05 crc kubenswrapper[4772]: W1128 11:23:05.770318 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod064be676_f5a5_4ae0_9fce_c2103f169de8.slice/crio-3fd9df72666bc0899509b8417fe31e14b1cc5a97d185f27d897b074dabbcf683 WatchSource:0}: Error finding container 3fd9df72666bc0899509b8417fe31e14b1cc5a97d185f27d897b074dabbcf683: Status 404 returned error can't find the container with id 3fd9df72666bc0899509b8417fe31e14b1cc5a97d185f27d897b074dabbcf683
Nov 28 11:23:06 crc kubenswrapper[4772]: I1128 11:23:06.467131 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"064be676-f5a5-4ae0-9fce-c2103f169de8","Type":"ContainerStarted","Data":"3fd9df72666bc0899509b8417fe31e14b1cc5a97d185f27d897b074dabbcf683"}
Nov 28 11:23:17 crc kubenswrapper[4772]: E1128 11:23:17.456514 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Nov 28 11:23:17 crc kubenswrapper[4772]: E1128 11:23:17.457289 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95v92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(52b2f98f-d36f-4798-9903-1f75498cdb5b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 11:23:17 crc kubenswrapper[4772]: E1128 11:23:17.458534 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="52b2f98f-d36f-4798-9903-1f75498cdb5b"
Nov 28 11:23:17 crc kubenswrapper[4772]: E1128 11:23:17.481227 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Nov 28 11:23:17 crc kubenswrapper[4772]: E1128 11:23:17.481500 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8r8n2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(4d684e5b-88f5-4004-a176-22ae480daaa6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 11:23:17 crc kubenswrapper[4772]: E1128 11:23:17.482740 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="4d684e5b-88f5-4004-a176-22ae480daaa6"
Nov 28 11:23:17 crc kubenswrapper[4772]: E1128 11:23:17.494413 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
Nov 28 11:23:17 crc kubenswrapper[4772]: E1128 11:23:17.494705 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btflw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(fbc92675-93c6-4d66-afb0-d83636cbf853): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 11:23:17 crc kubenswrapper[4772]: E1128 11:23:17.496499 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="fbc92675-93c6-4d66-afb0-d83636cbf853"
Nov 28 11:23:17 crc kubenswrapper[4772]: E1128 11:23:17.569615 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="52b2f98f-d36f-4798-9903-1f75498cdb5b"
Nov 28 11:23:17 crc kubenswrapper[4772]: E1128 11:23:17.570035 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="4d684e5b-88f5-4004-a176-22ae480daaa6"
Nov 28 11:23:17 crc kubenswrapper[4772]: E1128 11:23:17.570121 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="fbc92675-93c6-4d66-afb0-d83636cbf853"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.178565 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.178926 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nbh548h597h694h5d6h84h664h5b8h577hfbhc8hffhbch5cch547h9ch5fbh649hcfh674h576h6fh7fh554h596h665h587h65dh5cfhfdh57fhbq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xzs9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(f9e491cd-f369-412c-9b41-77844ff3057d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.180198 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="f9e491cd-f369-412c-9b41-77844ff3057d"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.578491 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="f9e491cd-f369-412c-9b41-77844ff3057d"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.867164 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.867345 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcpfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-mchqv_openstack(6ca2b953-45e9-4a19-9dda-8c76d7eeb213): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.868515 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" podUID="6ca2b953-45e9-4a19-9dda-8c76d7eeb213"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.894566 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.894827 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjcvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-h4kmr_openstack(9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.896384 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-h4kmr" podUID="9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.907825 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.908572 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mg7lq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-nfhhm_openstack(5c022176-398c-4a4f-af82-afd5b81847b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.910704 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" podUID="5c022176-398c-4a4f-af82-afd5b81847b2"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.931795 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.931990 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7b5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-q7hmz_openstack(f42ec8c4-f983-4c21-987c-d7f2ec23ff1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 11:23:18 crc kubenswrapper[4772]: E1128 11:23:18.933512 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" podUID="f42ec8c4-f983-4c21-987c-d7f2ec23ff1c"
Nov 28 11:23:19 crc kubenswrapper[4772]: I1128 11:23:19.301170 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qzrrh"]
Nov 28 11:23:19 crc kubenswrapper[4772]: I1128 11:23:19.493150 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 28 11:23:19 crc kubenswrapper[4772]: I1128 11:23:19.580978 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 28 11:23:19 crc kubenswrapper[4772]: I1128 11:23:19.590391 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ec35ef2e-e0de-4d42-9ab0-35033d549ac9","Type":"ContainerStarted","Data":"ecea46c9ef13d5e1117caddb9552ff8140db4d02e716c8e1036c0dbecd34d806"}
Nov 28 11:23:19 crc kubenswrapper[4772]: I1128 11:23:19.592113 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qzrrh" event={"ID":"42fee486-89c9-4f0e-9db6-ac695b62a588","Type":"ContainerStarted","Data":"c30236b2a9ee73046ba632d2a6f891fbf2bd1171c4fdbbdc151622a1229f7aaa"}
\\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" podUID="5c022176-398c-4a4f-af82-afd5b81847b2" Nov 28 11:23:19 crc kubenswrapper[4772]: E1128 11:23:19.594468 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" podUID="6ca2b953-45e9-4a19-9dda-8c76d7eeb213" Nov 28 11:23:19 crc kubenswrapper[4772]: W1128 11:23:19.645165 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod934fc42a_8a76_4b95_9ad0_5e13fa47d1cb.slice/crio-01efa7eca5f9ae7bdeb4a43bd1b26940875854150de7ee84f9836296aeec4b52 WatchSource:0}: Error finding container 01efa7eca5f9ae7bdeb4a43bd1b26940875854150de7ee84f9836296aeec4b52: Status 404 returned error can't find the container with id 01efa7eca5f9ae7bdeb4a43bd1b26940875854150de7ee84f9836296aeec4b52 Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.080254 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gjbm2"] Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.279796 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h4kmr" Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.299403 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.310886 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjcvg\" (UniqueName: \"kubernetes.io/projected/9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc-kube-api-access-wjcvg\") pod \"9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc\" (UID: \"9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc\") " Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.310946 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-config\") pod \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\" (UID: \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\") " Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.311029 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc-config\") pod \"9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc\" (UID: \"9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc\") " Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.311081 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-dns-svc\") pod \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\" (UID: \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\") " Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.311188 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7b5g\" (UniqueName: \"kubernetes.io/projected/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-kube-api-access-z7b5g\") pod \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\" (UID: \"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c\") " Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.312115 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f42ec8c4-f983-4c21-987c-d7f2ec23ff1c" (UID: "f42ec8c4-f983-4c21-987c-d7f2ec23ff1c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.312340 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-config" (OuterVolumeSpecName: "config") pod "f42ec8c4-f983-4c21-987c-d7f2ec23ff1c" (UID: "f42ec8c4-f983-4c21-987c-d7f2ec23ff1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.313355 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc-config" (OuterVolumeSpecName: "config") pod "9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc" (UID: "9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.321866 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc-kube-api-access-wjcvg" (OuterVolumeSpecName: "kube-api-access-wjcvg") pod "9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc" (UID: "9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc"). InnerVolumeSpecName "kube-api-access-wjcvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.324652 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-kube-api-access-z7b5g" (OuterVolumeSpecName: "kube-api-access-z7b5g") pod "f42ec8c4-f983-4c21-987c-d7f2ec23ff1c" (UID: "f42ec8c4-f983-4c21-987c-d7f2ec23ff1c"). InnerVolumeSpecName "kube-api-access-z7b5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.417885 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjcvg\" (UniqueName: \"kubernetes.io/projected/9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc-kube-api-access-wjcvg\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.417931 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.417944 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.417955 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.417966 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7b5g\" (UniqueName: \"kubernetes.io/projected/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c-kube-api-access-z7b5g\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.603330 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.603297 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-q7hmz" event={"ID":"f42ec8c4-f983-4c21-987c-d7f2ec23ff1c","Type":"ContainerDied","Data":"0d6bd8fd120b2852d4c7e475f90d58aa668fcf949d4b62f86900408ba183dfb3"} Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.607432 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-h4kmr" event={"ID":"9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc","Type":"ContainerDied","Data":"a95131779233e7cb7cf2cd97bc865e769c2a0676f7ee6671f7a4891e18d936dd"} Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.607747 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h4kmr" Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.610353 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20","Type":"ContainerStarted","Data":"a1d148705ee21af84760f8538295835e036891c359745cf99e5feff25fc866ff"} Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.611916 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb","Type":"ContainerStarted","Data":"01efa7eca5f9ae7bdeb4a43bd1b26940875854150de7ee84f9836296aeec4b52"} Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.613246 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gjbm2" event={"ID":"dbdbf695-f81d-431b-8330-6745cdbf9ab1","Type":"ContainerStarted","Data":"0ccd62804622970701b780967859718e3b037879f2b22e681533a24f51cd7fd3"} Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.677912 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-q7hmz"] Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.687097 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-q7hmz"] Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.725815 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h4kmr"] Nov 28 11:23:20 crc kubenswrapper[4772]: I1128 11:23:20.732476 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h4kmr"] Nov 28 11:23:21 crc kubenswrapper[4772]: I1128 11:23:21.625825 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"064be676-f5a5-4ae0-9fce-c2103f169de8","Type":"ContainerStarted","Data":"e612c1a473ab5febd45a9d07b045fa1d6b2307e23acc0effe8dcfe569bc1a8b0"} Nov 28 11:23:21 crc kubenswrapper[4772]: I1128 11:23:21.661731 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.708941946 podStartE2EDuration="25.661701537s" podCreationTimestamp="2025-11-28 11:22:56 +0000 UTC" firstStartedPulling="2025-11-28 11:23:05.776450065 +0000 UTC m=+984.099693292" lastFinishedPulling="2025-11-28 11:23:20.729209656 +0000 UTC m=+999.052452883" observedRunningTime="2025-11-28 11:23:21.648039785 +0000 UTC m=+999.971283022" watchObservedRunningTime="2025-11-28 11:23:21.661701537 +0000 UTC m=+999.984944774" Nov 28 11:23:22 crc kubenswrapper[4772]: I1128 11:23:22.013067 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc" 
path="/var/lib/kubelet/pods/9fec78aa-56c4-4a07-be2b-5e8ddfd66bcc/volumes" Nov 28 11:23:22 crc kubenswrapper[4772]: I1128 11:23:22.013608 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f42ec8c4-f983-4c21-987c-d7f2ec23ff1c" path="/var/lib/kubelet/pods/f42ec8c4-f983-4c21-987c-d7f2ec23ff1c/volumes" Nov 28 11:23:22 crc kubenswrapper[4772]: I1128 11:23:22.635118 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 11:23:23 crc kubenswrapper[4772]: I1128 11:23:23.656597 4772 generic.go:334] "Generic (PLEG): container finished" podID="ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20" containerID="a1d148705ee21af84760f8538295835e036891c359745cf99e5feff25fc866ff" exitCode=0 Nov 28 11:23:23 crc kubenswrapper[4772]: I1128 11:23:23.656678 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20","Type":"ContainerDied","Data":"a1d148705ee21af84760f8538295835e036891c359745cf99e5feff25fc866ff"} Nov 28 11:23:23 crc kubenswrapper[4772]: I1128 11:23:23.896666 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:23:23 crc kubenswrapper[4772]: I1128 11:23:23.896917 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:23:24 crc kubenswrapper[4772]: I1128 11:23:24.671027 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ec35ef2e-e0de-4d42-9ab0-35033d549ac9","Type":"ContainerStarted","Data":"ff8177b57ecdc08e8f1d4aa25a5b56ac0d165779986a6efe8695039e9a59a9fb"} Nov 28 11:23:24 crc kubenswrapper[4772]: I1128 11:23:24.677725 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qzrrh" event={"ID":"42fee486-89c9-4f0e-9db6-ac695b62a588","Type":"ContainerStarted","Data":"898ff8305d1a6c9ed9f751b04877c56b019d4ea695502d4183c65d649b27a028"} Nov 28 11:23:24 crc kubenswrapper[4772]: I1128 11:23:24.678489 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-qzrrh" Nov 28 11:23:24 crc kubenswrapper[4772]: I1128 11:23:24.682626 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb","Type":"ContainerStarted","Data":"e8869c30b479b0daffab86dc3ea9e34cbdf4ba507422ea4f316e7b63e9843401"} Nov 28 11:23:24 crc kubenswrapper[4772]: I1128 11:23:24.686062 4772 generic.go:334] "Generic (PLEG): container finished" podID="dbdbf695-f81d-431b-8330-6745cdbf9ab1" containerID="8486153e15152850976bc0af0c3e7166fe2119838804bbfe84bbec1e5e069eb8" exitCode=0 Nov 28 11:23:24 crc kubenswrapper[4772]: I1128 11:23:24.686210 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gjbm2" event={"ID":"dbdbf695-f81d-431b-8330-6745cdbf9ab1","Type":"ContainerDied","Data":"8486153e15152850976bc0af0c3e7166fe2119838804bbfe84bbec1e5e069eb8"} Nov 28 11:23:24 crc kubenswrapper[4772]: I1128 11:23:24.694173 4772 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/openstack-galera-0" event={"ID":"ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20","Type":"ContainerStarted","Data":"c6bba1f5563cadfa2d4a9e7a0765aafc96e790240f72794ef5cd1554abcc8b1a"} Nov 28 11:23:24 crc kubenswrapper[4772]: I1128 11:23:24.715878 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qzrrh" podStartSLOduration=20.534172215 podStartE2EDuration="24.715859359s" podCreationTimestamp="2025-11-28 11:23:00 +0000 UTC" firstStartedPulling="2025-11-28 11:23:19.402518611 +0000 UTC m=+997.725761828" lastFinishedPulling="2025-11-28 11:23:23.584205735 +0000 UTC m=+1001.907448972" observedRunningTime="2025-11-28 11:23:24.70817513 +0000 UTC m=+1003.031418367" watchObservedRunningTime="2025-11-28 11:23:24.715859359 +0000 UTC m=+1003.039102586" Nov 28 11:23:24 crc kubenswrapper[4772]: I1128 11:23:24.755028 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.134329444 podStartE2EDuration="32.755008695s" podCreationTimestamp="2025-11-28 11:22:52 +0000 UTC" firstStartedPulling="2025-11-28 11:22:54.17622715 +0000 UTC m=+972.499470377" lastFinishedPulling="2025-11-28 11:23:18.796906401 +0000 UTC m=+997.120149628" observedRunningTime="2025-11-28 11:23:24.741912498 +0000 UTC m=+1003.065155735" watchObservedRunningTime="2025-11-28 11:23:24.755008695 +0000 UTC m=+1003.078251912" Nov 28 11:23:25 crc kubenswrapper[4772]: I1128 11:23:25.709196 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gjbm2" event={"ID":"dbdbf695-f81d-431b-8330-6745cdbf9ab1","Type":"ContainerStarted","Data":"65ed48eeffc7c716ab1317606b789e2e9d1b19c4c5775a9ea358a07382436f0f"} Nov 28 11:23:26 crc kubenswrapper[4772]: I1128 11:23:26.723523 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gjbm2" event={"ID":"dbdbf695-f81d-431b-8330-6745cdbf9ab1","Type":"ContainerStarted","Data":"85ca6bc8841399fd8ab5b0c32e26bc988a41c10d3a337214e25cb8a842fd0e1f"} Nov 28 11:23:26 crc kubenswrapper[4772]: I1128 11:23:26.723679 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gjbm2" Nov 28 11:23:26 crc kubenswrapper[4772]: I1128 11:23:26.723758 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gjbm2" Nov 28 11:23:26 crc kubenswrapper[4772]: I1128 11:23:26.806724 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 28 11:23:26 crc kubenswrapper[4772]: I1128 11:23:26.830519 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-gjbm2" podStartSLOduration=23.464667289 podStartE2EDuration="26.830491817s" podCreationTimestamp="2025-11-28 11:23:00 +0000 UTC" firstStartedPulling="2025-11-28 11:23:20.220971547 +0000 UTC m=+998.544214774" lastFinishedPulling="2025-11-28 11:23:23.586796055 +0000 UTC m=+1001.910039302" observedRunningTime="2025-11-28 11:23:26.744468445 +0000 UTC m=+1005.067711692" watchObservedRunningTime="2025-11-28 11:23:26.830491817 +0000 UTC m=+1005.153735044" Nov 28 11:23:28 crc kubenswrapper[4772]: I1128 11:23:28.741821 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"934fc42a-8a76-4b95-9ad0-5e13fa47d1cb","Type":"ContainerStarted","Data":"ccfd80ad89133db9dd24e1755736b745bdc0dcd7f6968163bf96e439bfb39eda"} Nov 28 11:23:28 crc kubenswrapper[4772]: I1128 
11:23:28.744228 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ec35ef2e-e0de-4d42-9ab0-35033d549ac9","Type":"ContainerStarted","Data":"c33191bc33ea5283c4511f94126049d0acbd3deafee00f22e4259270d0788b56"} Nov 28 11:23:28 crc kubenswrapper[4772]: I1128 11:23:28.749855 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 28 11:23:28 crc kubenswrapper[4772]: I1128 11:23:28.772404 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=21.705567992 podStartE2EDuration="29.772389844s" podCreationTimestamp="2025-11-28 11:22:59 +0000 UTC" firstStartedPulling="2025-11-28 11:23:19.647162152 +0000 UTC m=+997.970405379" lastFinishedPulling="2025-11-28 11:23:27.713984004 +0000 UTC m=+1006.037227231" observedRunningTime="2025-11-28 11:23:28.765455335 +0000 UTC m=+1007.088698592" watchObservedRunningTime="2025-11-28 11:23:28.772389844 +0000 UTC m=+1007.095633071" Nov 28 11:23:28 crc kubenswrapper[4772]: I1128 11:23:28.785469 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.56554355 podStartE2EDuration="25.78545421s" podCreationTimestamp="2025-11-28 11:23:03 +0000 UTC" firstStartedPulling="2025-11-28 11:23:19.519673811 +0000 UTC m=+997.842917038" lastFinishedPulling="2025-11-28 11:23:27.739584471 +0000 UTC m=+1006.062827698" observedRunningTime="2025-11-28 11:23:28.784113193 +0000 UTC m=+1007.107356430" watchObservedRunningTime="2025-11-28 11:23:28.78545421 +0000 UTC m=+1007.108697437" Nov 28 11:23:28 crc kubenswrapper[4772]: I1128 11:23:28.814065 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 28 11:23:29 crc kubenswrapper[4772]: I1128 11:23:29.749596 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 28 11:23:29 crc kubenswrapper[4772]: I1128 11:23:29.826622 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.197584 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-nfhhm"] Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.214183 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-bf8rd"] Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.215713 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.217668 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.258094 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-bf8rd"] Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.325497 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-bf8rd\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.325631 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-config\") pod \"dnsmasq-dns-6bc7876d45-bf8rd\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.325665 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-bf8rd\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.325691 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dngx\" (UniqueName: \"kubernetes.io/projected/58d95407-3b7a-4ada-95c4-dcf358e19f01-kube-api-access-7dngx\") pod \"dnsmasq-dns-6bc7876d45-bf8rd\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.337420 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-9xz5n"] Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.338704 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.342329 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9xz5n"] Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.351971 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.429699 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dngx\" (UniqueName: \"kubernetes.io/projected/58d95407-3b7a-4ada-95c4-dcf358e19f01-kube-api-access-7dngx\") pod \"dnsmasq-dns-6bc7876d45-bf8rd\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.429749 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/620b05d3-f04b-4d52-b3c3-039dcc751696-config\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.429780 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/620b05d3-f04b-4d52-b3c3-039dcc751696-ovn-rundir\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.429803 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620b05d3-f04b-4d52-b3c3-039dcc751696-combined-ca-bundle\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.429851 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrrxk\" (UniqueName: \"kubernetes.io/projected/620b05d3-f04b-4d52-b3c3-039dcc751696-kube-api-access-hrrxk\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.429890 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-bf8rd\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.429910 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/620b05d3-f04b-4d52-b3c3-039dcc751696-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.429932 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/620b05d3-f04b-4d52-b3c3-039dcc751696-ovs-rundir\") pod 
\"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.429955 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-config\") pod \"dnsmasq-dns-6bc7876d45-bf8rd\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.429979 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-bf8rd\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.430993 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-bf8rd\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.431856 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-bf8rd\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.436010 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-config\") pod \"dnsmasq-dns-6bc7876d45-bf8rd\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.458080 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dngx\" (UniqueName: \"kubernetes.io/projected/58d95407-3b7a-4ada-95c4-dcf358e19f01-kube-api-access-7dngx\") pod \"dnsmasq-dns-6bc7876d45-bf8rd\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.531386 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrrxk\" (UniqueName: \"kubernetes.io/projected/620b05d3-f04b-4d52-b3c3-039dcc751696-kube-api-access-hrrxk\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.531452 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/620b05d3-f04b-4d52-b3c3-039dcc751696-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.531479 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/620b05d3-f04b-4d52-b3c3-039dcc751696-ovs-rundir\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " 
pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.531525 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/620b05d3-f04b-4d52-b3c3-039dcc751696-config\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.531549 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/620b05d3-f04b-4d52-b3c3-039dcc751696-ovn-rundir\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.531580 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620b05d3-f04b-4d52-b3c3-039dcc751696-combined-ca-bundle\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.532429 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/620b05d3-f04b-4d52-b3c3-039dcc751696-ovs-rundir\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.533502 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/620b05d3-f04b-4d52-b3c3-039dcc751696-ovn-rundir\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.533692 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/620b05d3-f04b-4d52-b3c3-039dcc751696-config\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.537741 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620b05d3-f04b-4d52-b3c3-039dcc751696-combined-ca-bundle\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.549859 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.551424 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/620b05d3-f04b-4d52-b3c3-039dcc751696-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.555375 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrrxk\" (UniqueName: \"kubernetes.io/projected/620b05d3-f04b-4d52-b3c3-039dcc751696-kube-api-access-hrrxk\") pod \"ovn-controller-metrics-9xz5n\" (UID: \"620b05d3-f04b-4d52-b3c3-039dcc751696\") " pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.640035 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mchqv"] Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.669122 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-9xz5n" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.688720 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-s7mtd"] Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.691209 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.693080 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.715935 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-s7mtd"] Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.816539 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.817405 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.831256 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.836402 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-config\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.836454 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqwlq\" (UniqueName: \"kubernetes.io/projected/78bd45e2-bdb4-4286-905f-9a5470c9728b-kube-api-access-wqwlq\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.836584 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.836606 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.836664 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-dns-svc\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.880537 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.941491 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c022176-398c-4a4f-af82-afd5b81847b2-config\") pod \"5c022176-398c-4a4f-af82-afd5b81847b2\" (UID: \"5c022176-398c-4a4f-af82-afd5b81847b2\") " Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.941710 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg7lq\" (UniqueName: \"kubernetes.io/projected/5c022176-398c-4a4f-af82-afd5b81847b2-kube-api-access-mg7lq\") pod \"5c022176-398c-4a4f-af82-afd5b81847b2\" (UID: \"5c022176-398c-4a4f-af82-afd5b81847b2\") " Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.941826 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c022176-398c-4a4f-af82-afd5b81847b2-dns-svc\") pod \"5c022176-398c-4a4f-af82-afd5b81847b2\" (UID: \"5c022176-398c-4a4f-af82-afd5b81847b2\") " Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.942626 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c022176-398c-4a4f-af82-afd5b81847b2-dns-svc" (OuterVolumeSpecName: "dns-svc") 
pod "5c022176-398c-4a4f-af82-afd5b81847b2" (UID: "5c022176-398c-4a4f-af82-afd5b81847b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.943792 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c022176-398c-4a4f-af82-afd5b81847b2-config" (OuterVolumeSpecName: "config") pod "5c022176-398c-4a4f-af82-afd5b81847b2" (UID: "5c022176-398c-4a4f-af82-afd5b81847b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.944025 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.944073 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.944577 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-dns-svc\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.944663 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-config\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.944734 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqwlq\" (UniqueName: \"kubernetes.io/projected/78bd45e2-bdb4-4286-905f-9a5470c9728b-kube-api-access-wqwlq\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.944882 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c022176-398c-4a4f-af82-afd5b81847b2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.944895 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c022176-398c-4a4f-af82-afd5b81847b2-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.945056 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.947015 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-dns-svc\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.947622 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-config\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.960411 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c022176-398c-4a4f-af82-afd5b81847b2-kube-api-access-mg7lq" (OuterVolumeSpecName: "kube-api-access-mg7lq") pod "5c022176-398c-4a4f-af82-afd5b81847b2" (UID: "5c022176-398c-4a4f-af82-afd5b81847b2"). InnerVolumeSpecName "kube-api-access-mg7lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.961014 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.977148 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqwlq\" (UniqueName: \"kubernetes.io/projected/78bd45e2-bdb4-4286-905f-9a5470c9728b-kube-api-access-wqwlq\") pod \"dnsmasq-dns-8554648995-s7mtd\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:30 crc kubenswrapper[4772]: W1128 11:23:30.977317 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58d95407_3b7a_4ada_95c4_dcf358e19f01.slice/crio-2f1b220bc6a9302439fc238c7a2849f0f1eccaabc833e64ab84ec9c0fe261073 WatchSource:0}: Error finding container 2f1b220bc6a9302439fc238c7a2849f0f1eccaabc833e64ab84ec9c0fe261073: Status 404 returned error can't find the container with id 2f1b220bc6a9302439fc238c7a2849f0f1eccaabc833e64ab84ec9c0fe261073 Nov 28 11:23:30 crc kubenswrapper[4772]: I1128 11:23:30.981437 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-bf8rd"] Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.044388 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.046276 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg7lq\" (UniqueName: \"kubernetes.io/projected/5c022176-398c-4a4f-af82-afd5b81847b2-kube-api-access-mg7lq\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.252595 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.359675 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-config\") pod \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\" (UID: \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\") " Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.360483 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-config" (OuterVolumeSpecName: "config") pod "6ca2b953-45e9-4a19-9dda-8c76d7eeb213" (UID: "6ca2b953-45e9-4a19-9dda-8c76d7eeb213"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.360685 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcpfh\" (UniqueName: \"kubernetes.io/projected/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-kube-api-access-fcpfh\") pod \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\" (UID: \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\") " Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.361931 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-dns-svc\") pod \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\" (UID: \"6ca2b953-45e9-4a19-9dda-8c76d7eeb213\") " Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.362460 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ca2b953-45e9-4a19-9dda-8c76d7eeb213" (UID: "6ca2b953-45e9-4a19-9dda-8c76d7eeb213"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.362694 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.362713 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.363235 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-s7mtd"] Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.367885 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-kube-api-access-fcpfh" (OuterVolumeSpecName: "kube-api-access-fcpfh") pod "6ca2b953-45e9-4a19-9dda-8c76d7eeb213" (UID: "6ca2b953-45e9-4a19-9dda-8c76d7eeb213"). InnerVolumeSpecName "kube-api-access-fcpfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:23:31 crc kubenswrapper[4772]: W1128 11:23:31.386844 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod620b05d3_f04b_4d52_b3c3_039dcc751696.slice/crio-532142e5099ac4902cfd8427c4fb7b75029a7352516f45618e5528ff8756b622 WatchSource:0}: Error finding container 532142e5099ac4902cfd8427c4fb7b75029a7352516f45618e5528ff8756b622: Status 404 returned error can't find the container with id 532142e5099ac4902cfd8427c4fb7b75029a7352516f45618e5528ff8756b622 Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.388096 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9xz5n"] Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.464713 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcpfh\" (UniqueName: \"kubernetes.io/projected/6ca2b953-45e9-4a19-9dda-8c76d7eeb213-kube-api-access-fcpfh\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.792780 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9xz5n" event={"ID":"620b05d3-f04b-4d52-b3c3-039dcc751696","Type":"ContainerStarted","Data":"532142e5099ac4902cfd8427c4fb7b75029a7352516f45618e5528ff8756b622"} Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.796955 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" event={"ID":"5c022176-398c-4a4f-af82-afd5b81847b2","Type":"ContainerDied","Data":"0d638e68cefaaea189d6fbe1849fbd0fb609c4d2e51e12844db04ff84e9c7704"} Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.797002 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-nfhhm" Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.800327 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" event={"ID":"58d95407-3b7a-4ada-95c4-dcf358e19f01","Type":"ContainerStarted","Data":"2f1b220bc6a9302439fc238c7a2849f0f1eccaabc833e64ab84ec9c0fe261073"} Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.801751 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" event={"ID":"6ca2b953-45e9-4a19-9dda-8c76d7eeb213","Type":"ContainerDied","Data":"355388bcbce1ba5ca6ec7dc94c4a46a70f65039fa3aab39ecf552ad2742be80c"} Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.801930 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mchqv" Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.815487 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-s7mtd" event={"ID":"78bd45e2-bdb4-4286-905f-9a5470c9728b","Type":"ContainerStarted","Data":"0a15b5aa68bb177058f3fdad695dae844fbc444f4a2f94a4092c6b05bf8bbefb"} Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.878965 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.895298 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-nfhhm"] Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.910773 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-nfhhm"] Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.972932 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mchqv"] Nov 28 11:23:31 crc kubenswrapper[4772]: I1128 11:23:31.983698 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mchqv"] Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.024196 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c022176-398c-4a4f-af82-afd5b81847b2" path="/var/lib/kubelet/pods/5c022176-398c-4a4f-af82-afd5b81847b2/volumes" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.024936 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca2b953-45e9-4a19-9dda-8c76d7eeb213" path="/var/lib/kubelet/pods/6ca2b953-45e9-4a19-9dda-8c76d7eeb213/volumes" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.208461 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.209944 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.212506 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-krv92" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.216759 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.216828 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.222371 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.226930 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.286928 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-config\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.286978 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.287000 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.287050 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.287291 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-scripts\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.287648 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.287790 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpwqv\" (UniqueName: \"kubernetes.io/projected/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-kube-api-access-vpwqv\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: 
I1128 11:23:32.391150 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.391304 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpwqv\" (UniqueName: \"kubernetes.io/projected/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-kube-api-access-vpwqv\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.391495 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-config\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.391547 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.391588 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.391640 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.391692 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-scripts\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.392546 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.394014 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-scripts\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.394810 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-config\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.399288 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.399510 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.403565 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.412891 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpwqv\" (UniqueName: \"kubernetes.io/projected/6a9655ce-0d05-449e-899d-6cbaa25cd5e9-kube-api-access-vpwqv\") pod \"ovn-northd-0\" (UID: \"6a9655ce-0d05-449e-899d-6cbaa25cd5e9\") " pod="openstack/ovn-northd-0" Nov 28 11:23:32 crc kubenswrapper[4772]: I1128 11:23:32.534674 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 11:23:33 crc kubenswrapper[4772]: I1128 11:23:33.023969 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 11:23:33 crc kubenswrapper[4772]: W1128 11:23:33.032673 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a9655ce_0d05_449e_899d_6cbaa25cd5e9.slice/crio-f69196da63ecd73aec958db5862cbbbce441805d45f0beeeac5b163d0bb5bf5e WatchSource:0}: Error finding container f69196da63ecd73aec958db5862cbbbce441805d45f0beeeac5b163d0bb5bf5e: Status 404 returned error can't find the container with id f69196da63ecd73aec958db5862cbbbce441805d45f0beeeac5b163d0bb5bf5e Nov 28 11:23:33 crc kubenswrapper[4772]: I1128 11:23:33.509821 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 28 11:23:33 crc kubenswrapper[4772]: I1128 11:23:33.509869 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 28 11:23:33 crc kubenswrapper[4772]: I1128 11:23:33.611951 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 28 11:23:33 crc kubenswrapper[4772]: I1128 11:23:33.836042 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6a9655ce-0d05-449e-899d-6cbaa25cd5e9","Type":"ContainerStarted","Data":"f69196da63ecd73aec958db5862cbbbce441805d45f0beeeac5b163d0bb5bf5e"} Nov 28 11:23:33 crc kubenswrapper[4772]: I1128 11:23:33.930941 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.714729 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-74a7-account-create-update-q7d68"] Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.717250 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-74a7-account-create-update-q7d68" Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.727142 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.765757 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fncq\" (UniqueName: \"kubernetes.io/projected/76af22b2-2887-4353-9228-25c97bd23c28-kube-api-access-2fncq\") pod \"keystone-74a7-account-create-update-q7d68\" (UID: \"76af22b2-2887-4353-9228-25c97bd23c28\") " pod="openstack/keystone-74a7-account-create-update-q7d68" Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.765891 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76af22b2-2887-4353-9228-25c97bd23c28-operator-scripts\") pod \"keystone-74a7-account-create-update-q7d68\" (UID: \"76af22b2-2887-4353-9228-25c97bd23c28\") " pod="openstack/keystone-74a7-account-create-update-q7d68" Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.772310 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74a7-account-create-update-q7d68"] Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.799776 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zmxkp"] Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.801800 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zmxkp" Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.812302 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zmxkp"] Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.845126 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fbc92675-93c6-4d66-afb0-d83636cbf853","Type":"ContainerStarted","Data":"581a1454235320e6919daaf1c1edfc482ed58c0a82e6e22beebb44d7de7db398"} Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.868675 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fncq\" (UniqueName: \"kubernetes.io/projected/76af22b2-2887-4353-9228-25c97bd23c28-kube-api-access-2fncq\") pod \"keystone-74a7-account-create-update-q7d68\" (UID: \"76af22b2-2887-4353-9228-25c97bd23c28\") " pod="openstack/keystone-74a7-account-create-update-q7d68" Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.868845 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76af22b2-2887-4353-9228-25c97bd23c28-operator-scripts\") pod \"keystone-74a7-account-create-update-q7d68\" (UID: \"76af22b2-2887-4353-9228-25c97bd23c28\") " pod="openstack/keystone-74a7-account-create-update-q7d68" Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.868929 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pbbf\" (UniqueName: \"kubernetes.io/projected/6c32df24-75af-46c6-bc3a-defed81bd9e0-kube-api-access-7pbbf\") pod \"keystone-db-create-zmxkp\" (UID: \"6c32df24-75af-46c6-bc3a-defed81bd9e0\") " pod="openstack/keystone-db-create-zmxkp" Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.868967 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c32df24-75af-46c6-bc3a-defed81bd9e0-operator-scripts\") pod \"keystone-db-create-zmxkp\" (UID: \"6c32df24-75af-46c6-bc3a-defed81bd9e0\") " pod="openstack/keystone-db-create-zmxkp" Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.869807 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76af22b2-2887-4353-9228-25c97bd23c28-operator-scripts\") pod \"keystone-74a7-account-create-update-q7d68\" (UID: \"76af22b2-2887-4353-9228-25c97bd23c28\") " pod="openstack/keystone-74a7-account-create-update-q7d68" Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.896685 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fncq\" (UniqueName: \"kubernetes.io/projected/76af22b2-2887-4353-9228-25c97bd23c28-kube-api-access-2fncq\") pod \"keystone-74a7-account-create-update-q7d68\" (UID: \"76af22b2-2887-4353-9228-25c97bd23c28\") " pod="openstack/keystone-74a7-account-create-update-q7d68" Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.971668 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pbbf\" (UniqueName: \"kubernetes.io/projected/6c32df24-75af-46c6-bc3a-defed81bd9e0-kube-api-access-7pbbf\") pod \"keystone-db-create-zmxkp\" (UID: \"6c32df24-75af-46c6-bc3a-defed81bd9e0\") " pod="openstack/keystone-db-create-zmxkp" Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.971746 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c32df24-75af-46c6-bc3a-defed81bd9e0-operator-scripts\") pod \"keystone-db-create-zmxkp\" (UID: \"6c32df24-75af-46c6-bc3a-defed81bd9e0\") " pod="openstack/keystone-db-create-zmxkp" Nov 28 11:23:34 crc kubenswrapper[4772]: I1128 11:23:34.972657 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c32df24-75af-46c6-bc3a-defed81bd9e0-operator-scripts\") pod \"keystone-db-create-zmxkp\" (UID: \"6c32df24-75af-46c6-bc3a-defed81bd9e0\") " pod="openstack/keystone-db-create-zmxkp" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.010262 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pbbf\" (UniqueName: \"kubernetes.io/projected/6c32df24-75af-46c6-bc3a-defed81bd9e0-kube-api-access-7pbbf\") pod \"keystone-db-create-zmxkp\" (UID: \"6c32df24-75af-46c6-bc3a-defed81bd9e0\") " pod="openstack/keystone-db-create-zmxkp" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.032633 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-kjrfj"] Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.034149 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kjrfj" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.040873 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kjrfj"] Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.071490 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-74a7-account-create-update-q7d68" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.072581 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b737be1-c629-4f60-9fb5-6102b6ab5cc0-operator-scripts\") pod \"placement-db-create-kjrfj\" (UID: \"5b737be1-c629-4f60-9fb5-6102b6ab5cc0\") " pod="openstack/placement-db-create-kjrfj" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.072702 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfgww\" (UniqueName: \"kubernetes.io/projected/5b737be1-c629-4f60-9fb5-6102b6ab5cc0-kube-api-access-xfgww\") pod \"placement-db-create-kjrfj\" (UID: \"5b737be1-c629-4f60-9fb5-6102b6ab5cc0\") " pod="openstack/placement-db-create-kjrfj" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.122414 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zmxkp" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.173995 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfgww\" (UniqueName: \"kubernetes.io/projected/5b737be1-c629-4f60-9fb5-6102b6ab5cc0-kube-api-access-xfgww\") pod \"placement-db-create-kjrfj\" (UID: \"5b737be1-c629-4f60-9fb5-6102b6ab5cc0\") " pod="openstack/placement-db-create-kjrfj" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.174074 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b737be1-c629-4f60-9fb5-6102b6ab5cc0-operator-scripts\") pod \"placement-db-create-kjrfj\" (UID: \"5b737be1-c629-4f60-9fb5-6102b6ab5cc0\") " pod="openstack/placement-db-create-kjrfj" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.174928 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c5b4-account-create-update-n4f5g"] Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.175173 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b737be1-c629-4f60-9fb5-6102b6ab5cc0-operator-scripts\") pod \"placement-db-create-kjrfj\" (UID: \"5b737be1-c629-4f60-9fb5-6102b6ab5cc0\") " pod="openstack/placement-db-create-kjrfj" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.176854 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c5b4-account-create-update-n4f5g" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.180107 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.197826 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfgww\" (UniqueName: \"kubernetes.io/projected/5b737be1-c629-4f60-9fb5-6102b6ab5cc0-kube-api-access-xfgww\") pod \"placement-db-create-kjrfj\" (UID: \"5b737be1-c629-4f60-9fb5-6102b6ab5cc0\") " pod="openstack/placement-db-create-kjrfj" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.208436 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c5b4-account-create-update-n4f5g"] Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.278002 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2a5e9d0-5893-460f-8f77-f896d57f515c-operator-scripts\") pod \"placement-c5b4-account-create-update-n4f5g\" (UID: \"b2a5e9d0-5893-460f-8f77-f896d57f515c\") " pod="openstack/placement-c5b4-account-create-update-n4f5g" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.278439 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q268d\" (UniqueName: \"kubernetes.io/projected/b2a5e9d0-5893-460f-8f77-f896d57f515c-kube-api-access-q268d\") pod \"placement-c5b4-account-create-update-n4f5g\" (UID: \"b2a5e9d0-5893-460f-8f77-f896d57f515c\") " pod="openstack/placement-c5b4-account-create-update-n4f5g" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.380180 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q268d\" (UniqueName: \"kubernetes.io/projected/b2a5e9d0-5893-460f-8f77-f896d57f515c-kube-api-access-q268d\") pod \"placement-c5b4-account-create-update-n4f5g\" (UID: \"b2a5e9d0-5893-460f-8f77-f896d57f515c\") " pod="openstack/placement-c5b4-account-create-update-n4f5g" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.380334 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2a5e9d0-5893-460f-8f77-f896d57f515c-operator-scripts\") pod \"placement-c5b4-account-create-update-n4f5g\" (UID: \"b2a5e9d0-5893-460f-8f77-f896d57f515c\") " pod="openstack/placement-c5b4-account-create-update-n4f5g" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.381051 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2a5e9d0-5893-460f-8f77-f896d57f515c-operator-scripts\") pod \"placement-c5b4-account-create-update-n4f5g\" (UID: \"b2a5e9d0-5893-460f-8f77-f896d57f515c\") " pod="openstack/placement-c5b4-account-create-update-n4f5g" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.398559 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q268d\" (UniqueName: \"kubernetes.io/projected/b2a5e9d0-5893-460f-8f77-f896d57f515c-kube-api-access-q268d\") pod \"placement-c5b4-account-create-update-n4f5g\" (UID: \"b2a5e9d0-5893-460f-8f77-f896d57f515c\") " pod="openstack/placement-c5b4-account-create-update-n4f5g" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.425824 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-kjrfj" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.568965 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c5b4-account-create-update-n4f5g" Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.642722 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zmxkp"] Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.681456 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kjrfj"] Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.696264 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74a7-account-create-update-q7d68"] Nov 28 11:23:35 crc kubenswrapper[4772]: W1128 11:23:35.697944 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b737be1_c629_4f60_9fb5_6102b6ab5cc0.slice/crio-4181ccfe5bb546be4eca19b260f12607a631e23cb68937e53f70473020590691 WatchSource:0}: Error finding container 4181ccfe5bb546be4eca19b260f12607a631e23cb68937e53f70473020590691: Status 404 returned error can't find the container with id 4181ccfe5bb546be4eca19b260f12607a631e23cb68937e53f70473020590691 Nov 28 11:23:35 crc kubenswrapper[4772]: W1128 11:23:35.702770 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76af22b2_2887_4353_9228_25c97bd23c28.slice/crio-cf48ceff44b656d7f2ecada787a79695a8158db05b5dcfe58934f0eefad34345 WatchSource:0}: Error finding container cf48ceff44b656d7f2ecada787a79695a8158db05b5dcfe58934f0eefad34345: Status 404 returned error can't find the container with id cf48ceff44b656d7f2ecada787a79695a8158db05b5dcfe58934f0eefad34345 Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.855672 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74a7-account-create-update-q7d68" event={"ID":"76af22b2-2887-4353-9228-25c97bd23c28","Type":"ContainerStarted","Data":"cf48ceff44b656d7f2ecada787a79695a8158db05b5dcfe58934f0eefad34345"} Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.858664 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zmxkp" event={"ID":"6c32df24-75af-46c6-bc3a-defed81bd9e0","Type":"ContainerStarted","Data":"69c8363a130c6fc83e06eb1924d181a95023c219e3d75e5af84551831844e442"} Nov 28 11:23:35 crc kubenswrapper[4772]: I1128 11:23:35.861060 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kjrfj" event={"ID":"5b737be1-c629-4f60-9fb5-6102b6ab5cc0","Type":"ContainerStarted","Data":"4181ccfe5bb546be4eca19b260f12607a631e23cb68937e53f70473020590691"} Nov 28 11:23:36 crc kubenswrapper[4772]: I1128 11:23:36.008757 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c5b4-account-create-update-n4f5g"] Nov 28 11:23:36 crc kubenswrapper[4772]: W1128 11:23:36.019231 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2a5e9d0_5893_460f_8f77_f896d57f515c.slice/crio-453d9d5f0389d72ba1646c269d97f5a8e4a2f4d4ff79e7ac961c5618bc849d6c WatchSource:0}: Error finding container 453d9d5f0389d72ba1646c269d97f5a8e4a2f4d4ff79e7ac961c5618bc849d6c: Status 404 returned error can't find the container with id 453d9d5f0389d72ba1646c269d97f5a8e4a2f4d4ff79e7ac961c5618bc849d6c Nov 28 11:23:36 crc 
kubenswrapper[4772]: I1128 11:23:36.874461 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c5b4-account-create-update-n4f5g" event={"ID":"b2a5e9d0-5893-460f-8f77-f896d57f515c","Type":"ContainerStarted","Data":"453d9d5f0389d72ba1646c269d97f5a8e4a2f4d4ff79e7ac961c5618bc849d6c"} Nov 28 11:23:37 crc kubenswrapper[4772]: I1128 11:23:37.883269 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74a7-account-create-update-q7d68" event={"ID":"76af22b2-2887-4353-9228-25c97bd23c28","Type":"ContainerStarted","Data":"5d55adb86d8d9478012963d820b62558f92c5b502d94d177717c5a090370f602"} Nov 28 11:23:37 crc kubenswrapper[4772]: I1128 11:23:37.885200 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zmxkp" event={"ID":"6c32df24-75af-46c6-bc3a-defed81bd9e0","Type":"ContainerStarted","Data":"c04272903dc408a8ec05e11e76c28ff87917001ad9523fb9c63a5a87d2e269ff"} Nov 28 11:23:37 crc kubenswrapper[4772]: I1128 11:23:37.892425 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kjrfj" event={"ID":"5b737be1-c629-4f60-9fb5-6102b6ab5cc0","Type":"ContainerStarted","Data":"2a2e85fc0331bc2ea8c17dee3836751026aa26d07b3b13b5c68d7a4948b3611a"} Nov 28 11:23:37 crc kubenswrapper[4772]: I1128 11:23:37.911473 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-kjrfj" podStartSLOduration=2.911429903 podStartE2EDuration="2.911429903s" podCreationTimestamp="2025-11-28 11:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:23:37.904529255 +0000 UTC m=+1016.227772492" watchObservedRunningTime="2025-11-28 11:23:37.911429903 +0000 UTC m=+1016.234673130" Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.901560 4772 generic.go:334] "Generic (PLEG): container finished" podID="58d95407-3b7a-4ada-95c4-dcf358e19f01" containerID="f0629b22a328fcd76b28f2d498300fce43990ca58e5af212a4b6e4d817ed3f61" exitCode=0 Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.901611 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" event={"ID":"58d95407-3b7a-4ada-95c4-dcf358e19f01","Type":"ContainerDied","Data":"f0629b22a328fcd76b28f2d498300fce43990ca58e5af212a4b6e4d817ed3f61"} Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.907383 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f9e491cd-f369-412c-9b41-77844ff3057d","Type":"ContainerStarted","Data":"2eef0153e79f1f4c2dd64c455bd5b29839d60cf77f6c7c289c9f6134b9e04b96"} Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.908104 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.908924 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9xz5n" event={"ID":"620b05d3-f04b-4d52-b3c3-039dcc751696","Type":"ContainerStarted","Data":"c99bfd2d21dffc9dd09bf56eb91ef8b832c78d9c97f7ab509c3926d1a24861cb"} Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.910196 4772 generic.go:334] "Generic (PLEG): container finished" podID="b2a5e9d0-5893-460f-8f77-f896d57f515c" containerID="a9a14535067a89aabae9a07a65f4e6f0e19c4b7912fec3c49c31657810166d15" exitCode=0 Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.910297 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-c5b4-account-create-update-n4f5g" event={"ID":"b2a5e9d0-5893-460f-8f77-f896d57f515c","Type":"ContainerDied","Data":"a9a14535067a89aabae9a07a65f4e6f0e19c4b7912fec3c49c31657810166d15"} Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.917272 4772 generic.go:334] "Generic (PLEG): container finished" podID="76af22b2-2887-4353-9228-25c97bd23c28" containerID="5d55adb86d8d9478012963d820b62558f92c5b502d94d177717c5a090370f602" exitCode=0 Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.917393 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74a7-account-create-update-q7d68" event={"ID":"76af22b2-2887-4353-9228-25c97bd23c28","Type":"ContainerDied","Data":"5d55adb86d8d9478012963d820b62558f92c5b502d94d177717c5a090370f602"} Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.919949 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6a9655ce-0d05-449e-899d-6cbaa25cd5e9","Type":"ContainerStarted","Data":"c1b321e3c00ea6cebff3395e6e6d95f851f10aa9106fa788ae0a1eec5ea65c86"} Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.923239 4772 generic.go:334] "Generic (PLEG): container finished" podID="6c32df24-75af-46c6-bc3a-defed81bd9e0" containerID="c04272903dc408a8ec05e11e76c28ff87917001ad9523fb9c63a5a87d2e269ff" exitCode=0 Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.923317 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zmxkp" event={"ID":"6c32df24-75af-46c6-bc3a-defed81bd9e0","Type":"ContainerDied","Data":"c04272903dc408a8ec05e11e76c28ff87917001ad9523fb9c63a5a87d2e269ff"} Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.926162 4772 generic.go:334] "Generic (PLEG): container finished" podID="5b737be1-c629-4f60-9fb5-6102b6ab5cc0" containerID="2a2e85fc0331bc2ea8c17dee3836751026aa26d07b3b13b5c68d7a4948b3611a" exitCode=0 Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.926580 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kjrfj" event={"ID":"5b737be1-c629-4f60-9fb5-6102b6ab5cc0","Type":"ContainerDied","Data":"2a2e85fc0331bc2ea8c17dee3836751026aa26d07b3b13b5c68d7a4948b3611a"} Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.928156 4772 generic.go:334] "Generic (PLEG): container finished" podID="78bd45e2-bdb4-4286-905f-9a5470c9728b" containerID="8c7dab3be7f99b1bea4473f041f4b5bf1f1c9b50df90f5f62c273dfb9b7ce25c" exitCode=0 Nov 28 11:23:38 crc kubenswrapper[4772]: I1128 11:23:38.929585 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-s7mtd" event={"ID":"78bd45e2-bdb4-4286-905f-9a5470c9728b","Type":"ContainerDied","Data":"8c7dab3be7f99b1bea4473f041f4b5bf1f1c9b50df90f5f62c273dfb9b7ce25c"} Nov 28 11:23:39 crc kubenswrapper[4772]: I1128 11:23:39.184954 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-9xz5n" podStartSLOduration=9.184925719 podStartE2EDuration="9.184925719s" podCreationTimestamp="2025-11-28 11:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:23:39.167735791 +0000 UTC m=+1017.490979018" watchObservedRunningTime="2025-11-28 11:23:39.184925719 +0000 UTC m=+1017.508168946" Nov 28 11:23:39 crc kubenswrapper[4772]: I1128 11:23:39.207717 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.062971274 
podStartE2EDuration="45.207695659s" podCreationTimestamp="2025-11-28 11:22:54 +0000 UTC" firstStartedPulling="2025-11-28 11:22:55.888999728 +0000 UTC m=+974.212242955" lastFinishedPulling="2025-11-28 11:23:38.033724113 +0000 UTC m=+1016.356967340" observedRunningTime="2025-11-28 11:23:39.19560947 +0000 UTC m=+1017.518852697" watchObservedRunningTime="2025-11-28 11:23:39.207695659 +0000 UTC m=+1017.530938886" Nov 28 11:23:39 crc kubenswrapper[4772]: I1128 11:23:39.945400 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6a9655ce-0d05-449e-899d-6cbaa25cd5e9","Type":"ContainerStarted","Data":"7dcfc7d75b7e5e4c54fe9df742c771df88fb6f777c22b343a2d1af542a9363ce"} Nov 28 11:23:39 crc kubenswrapper[4772]: I1128 11:23:39.945916 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 28 11:23:39 crc kubenswrapper[4772]: I1128 11:23:39.952312 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" event={"ID":"58d95407-3b7a-4ada-95c4-dcf358e19f01","Type":"ContainerStarted","Data":"325aac35ee9c06183a1cfce90a8e9a8430330b687c42121863426c36a63bb976"} Nov 28 11:23:39 crc kubenswrapper[4772]: I1128 11:23:39.952933 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:39 crc kubenswrapper[4772]: I1128 11:23:39.954519 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52b2f98f-d36f-4798-9903-1f75498cdb5b","Type":"ContainerStarted","Data":"820b521e8723c89ca4b5ab79f599eaa92d05cdf27d3674e74608ff1f326ac44e"} Nov 28 11:23:39 crc kubenswrapper[4772]: I1128 11:23:39.957238 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-s7mtd" event={"ID":"78bd45e2-bdb4-4286-905f-9a5470c9728b","Type":"ContainerStarted","Data":"1741d699a20ff41779fe5fabdebfa740333826a7c09c7a922a146dd53669df46"} Nov 28 11:23:39 crc kubenswrapper[4772]: I1128 11:23:39.957437 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:39 crc kubenswrapper[4772]: I1128 11:23:39.959696 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4d684e5b-88f5-4004-a176-22ae480daaa6","Type":"ContainerStarted","Data":"ee956eabea4b4338ff07057f9b20ad599a52188d715ed812aea06dde00ec9f7e"} Nov 28 11:23:39 crc kubenswrapper[4772]: I1128 11:23:39.985622 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.38167606 podStartE2EDuration="7.985590501s" podCreationTimestamp="2025-11-28 11:23:32 +0000 UTC" firstStartedPulling="2025-11-28 11:23:33.034341523 +0000 UTC m=+1011.357584750" lastFinishedPulling="2025-11-28 11:23:38.638255964 +0000 UTC m=+1016.961499191" observedRunningTime="2025-11-28 11:23:39.983500924 +0000 UTC m=+1018.306744181" watchObservedRunningTime="2025-11-28 11:23:39.985590501 +0000 UTC m=+1018.308833738" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.062800 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-s7mtd" podStartSLOduration=3.446117516 podStartE2EDuration="10.062779003s" podCreationTimestamp="2025-11-28 11:23:30 +0000 UTC" firstStartedPulling="2025-11-28 11:23:31.368027601 +0000 UTC m=+1009.691270828" lastFinishedPulling="2025-11-28 11:23:37.984689088 +0000 UTC m=+1016.307932315" 
observedRunningTime="2025-11-28 11:23:40.054031235 +0000 UTC m=+1018.377274512" watchObservedRunningTime="2025-11-28 11:23:40.062779003 +0000 UTC m=+1018.386022230" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.508875 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kjrfj" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.515269 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74a7-account-create-update-q7d68" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.519963 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zmxkp" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.525940 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c5b4-account-create-update-n4f5g" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.534006 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" podStartSLOduration=3.532715924 podStartE2EDuration="10.533989933s" podCreationTimestamp="2025-11-28 11:23:30 +0000 UTC" firstStartedPulling="2025-11-28 11:23:30.982771541 +0000 UTC m=+1009.306014768" lastFinishedPulling="2025-11-28 11:23:37.98404555 +0000 UTC m=+1016.307288777" observedRunningTime="2025-11-28 11:23:40.120214307 +0000 UTC m=+1018.443457534" watchObservedRunningTime="2025-11-28 11:23:40.533989933 +0000 UTC m=+1018.857233160" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.596944 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfgww\" (UniqueName: \"kubernetes.io/projected/5b737be1-c629-4f60-9fb5-6102b6ab5cc0-kube-api-access-xfgww\") pod \"5b737be1-c629-4f60-9fb5-6102b6ab5cc0\" (UID: \"5b737be1-c629-4f60-9fb5-6102b6ab5cc0\") " Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.597030 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b737be1-c629-4f60-9fb5-6102b6ab5cc0-operator-scripts\") pod \"5b737be1-c629-4f60-9fb5-6102b6ab5cc0\" (UID: \"5b737be1-c629-4f60-9fb5-6102b6ab5cc0\") " Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.597871 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b737be1-c629-4f60-9fb5-6102b6ab5cc0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b737be1-c629-4f60-9fb5-6102b6ab5cc0" (UID: "5b737be1-c629-4f60-9fb5-6102b6ab5cc0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.597986 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c32df24-75af-46c6-bc3a-defed81bd9e0-operator-scripts\") pod \"6c32df24-75af-46c6-bc3a-defed81bd9e0\" (UID: \"6c32df24-75af-46c6-bc3a-defed81bd9e0\") " Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.598030 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q268d\" (UniqueName: \"kubernetes.io/projected/b2a5e9d0-5893-460f-8f77-f896d57f515c-kube-api-access-q268d\") pod \"b2a5e9d0-5893-460f-8f77-f896d57f515c\" (UID: \"b2a5e9d0-5893-460f-8f77-f896d57f515c\") " Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.598085 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76af22b2-2887-4353-9228-25c97bd23c28-operator-scripts\") pod \"76af22b2-2887-4353-9228-25c97bd23c28\" (UID: \"76af22b2-2887-4353-9228-25c97bd23c28\") " Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.598129 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2a5e9d0-5893-460f-8f77-f896d57f515c-operator-scripts\") pod \"b2a5e9d0-5893-460f-8f77-f896d57f515c\" (UID: \"b2a5e9d0-5893-460f-8f77-f896d57f515c\") " Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.598151 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fncq\" (UniqueName: \"kubernetes.io/projected/76af22b2-2887-4353-9228-25c97bd23c28-kube-api-access-2fncq\") pod \"76af22b2-2887-4353-9228-25c97bd23c28\" (UID: \"76af22b2-2887-4353-9228-25c97bd23c28\") " Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.598191 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pbbf\" (UniqueName: \"kubernetes.io/projected/6c32df24-75af-46c6-bc3a-defed81bd9e0-kube-api-access-7pbbf\") pod \"6c32df24-75af-46c6-bc3a-defed81bd9e0\" (UID: \"6c32df24-75af-46c6-bc3a-defed81bd9e0\") " Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.598509 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b737be1-c629-4f60-9fb5-6102b6ab5cc0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.599286 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2a5e9d0-5893-460f-8f77-f896d57f515c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2a5e9d0-5893-460f-8f77-f896d57f515c" (UID: "b2a5e9d0-5893-460f-8f77-f896d57f515c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.599526 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76af22b2-2887-4353-9228-25c97bd23c28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "76af22b2-2887-4353-9228-25c97bd23c28" (UID: "76af22b2-2887-4353-9228-25c97bd23c28"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.599588 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c32df24-75af-46c6-bc3a-defed81bd9e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c32df24-75af-46c6-bc3a-defed81bd9e0" (UID: "6c32df24-75af-46c6-bc3a-defed81bd9e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.606647 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c32df24-75af-46c6-bc3a-defed81bd9e0-kube-api-access-7pbbf" (OuterVolumeSpecName: "kube-api-access-7pbbf") pod "6c32df24-75af-46c6-bc3a-defed81bd9e0" (UID: "6c32df24-75af-46c6-bc3a-defed81bd9e0"). InnerVolumeSpecName "kube-api-access-7pbbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.606835 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b737be1-c629-4f60-9fb5-6102b6ab5cc0-kube-api-access-xfgww" (OuterVolumeSpecName: "kube-api-access-xfgww") pod "5b737be1-c629-4f60-9fb5-6102b6ab5cc0" (UID: "5b737be1-c629-4f60-9fb5-6102b6ab5cc0"). InnerVolumeSpecName "kube-api-access-xfgww". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.611760 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76af22b2-2887-4353-9228-25c97bd23c28-kube-api-access-2fncq" (OuterVolumeSpecName: "kube-api-access-2fncq") pod "76af22b2-2887-4353-9228-25c97bd23c28" (UID: "76af22b2-2887-4353-9228-25c97bd23c28"). InnerVolumeSpecName "kube-api-access-2fncq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.619654 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2a5e9d0-5893-460f-8f77-f896d57f515c-kube-api-access-q268d" (OuterVolumeSpecName: "kube-api-access-q268d") pod "b2a5e9d0-5893-460f-8f77-f896d57f515c" (UID: "b2a5e9d0-5893-460f-8f77-f896d57f515c"). InnerVolumeSpecName "kube-api-access-q268d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.701531 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c32df24-75af-46c6-bc3a-defed81bd9e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.701610 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q268d\" (UniqueName: \"kubernetes.io/projected/b2a5e9d0-5893-460f-8f77-f896d57f515c-kube-api-access-q268d\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.701636 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76af22b2-2887-4353-9228-25c97bd23c28-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.701654 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2a5e9d0-5893-460f-8f77-f896d57f515c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.701674 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fncq\" (UniqueName: \"kubernetes.io/projected/76af22b2-2887-4353-9228-25c97bd23c28-kube-api-access-2fncq\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.701693 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pbbf\" (UniqueName: \"kubernetes.io/projected/6c32df24-75af-46c6-bc3a-defed81bd9e0-kube-api-access-7pbbf\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.701712 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfgww\" (UniqueName: \"kubernetes.io/projected/5b737be1-c629-4f60-9fb5-6102b6ab5cc0-kube-api-access-xfgww\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.978831 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c5b4-account-create-update-n4f5g" event={"ID":"b2a5e9d0-5893-460f-8f77-f896d57f515c","Type":"ContainerDied","Data":"453d9d5f0389d72ba1646c269d97f5a8e4a2f4d4ff79e7ac961c5618bc849d6c"} Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.978893 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="453d9d5f0389d72ba1646c269d97f5a8e4a2f4d4ff79e7ac961c5618bc849d6c" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.978978 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c5b4-account-create-update-n4f5g" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.982398 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-74a7-account-create-update-q7d68" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.982549 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74a7-account-create-update-q7d68" event={"ID":"76af22b2-2887-4353-9228-25c97bd23c28","Type":"ContainerDied","Data":"cf48ceff44b656d7f2ecada787a79695a8158db05b5dcfe58934f0eefad34345"} Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.982676 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf48ceff44b656d7f2ecada787a79695a8158db05b5dcfe58934f0eefad34345" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.985476 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zmxkp" event={"ID":"6c32df24-75af-46c6-bc3a-defed81bd9e0","Type":"ContainerDied","Data":"69c8363a130c6fc83e06eb1924d181a95023c219e3d75e5af84551831844e442"} Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.985582 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69c8363a130c6fc83e06eb1924d181a95023c219e3d75e5af84551831844e442" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.985489 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zmxkp" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.988768 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kjrfj" event={"ID":"5b737be1-c629-4f60-9fb5-6102b6ab5cc0","Type":"ContainerDied","Data":"4181ccfe5bb546be4eca19b260f12607a631e23cb68937e53f70473020590691"} Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.988852 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4181ccfe5bb546be4eca19b260f12607a631e23cb68937e53f70473020590691" Nov 28 11:23:40 crc kubenswrapper[4772]: I1128 11:23:40.989051 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-kjrfj" Nov 28 11:23:42 crc kubenswrapper[4772]: I1128 11:23:42.182050 4772 generic.go:334] "Generic (PLEG): container finished" podID="fbc92675-93c6-4d66-afb0-d83636cbf853" containerID="581a1454235320e6919daaf1c1edfc482ed58c0a82e6e22beebb44d7de7db398" exitCode=0 Nov 28 11:23:42 crc kubenswrapper[4772]: I1128 11:23:42.203569 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fbc92675-93c6-4d66-afb0-d83636cbf853","Type":"ContainerDied","Data":"581a1454235320e6919daaf1c1edfc482ed58c0a82e6e22beebb44d7de7db398"} Nov 28 11:23:43 crc kubenswrapper[4772]: I1128 11:23:43.199534 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fbc92675-93c6-4d66-afb0-d83636cbf853","Type":"ContainerStarted","Data":"b542c3a3c8207e5380be954488a357cfcf58f94e64babb1b77a4d8d78a01c5d3"} Nov 28 11:23:43 crc kubenswrapper[4772]: I1128 11:23:43.241135 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371986.61367 podStartE2EDuration="50.241105046s" podCreationTimestamp="2025-11-28 11:22:53 +0000 UTC" firstStartedPulling="2025-11-28 11:22:55.583651654 +0000 UTC m=+973.906894881" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:23:43.236211443 +0000 UTC m=+1021.559454700" watchObservedRunningTime="2025-11-28 11:23:43.241105046 +0000 UTC m=+1021.564348313" Nov 28 11:23:44 crc kubenswrapper[4772]: I1128 11:23:44.982026 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 28 11:23:44 crc kubenswrapper[4772]: I1128 11:23:44.982516 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.207077 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.408049 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-dgrqq"] Nov 28 11:23:45 crc kubenswrapper[4772]: E1128 11:23:45.408602 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76af22b2-2887-4353-9228-25c97bd23c28" containerName="mariadb-account-create-update" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.408628 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="76af22b2-2887-4353-9228-25c97bd23c28" containerName="mariadb-account-create-update" Nov 28 11:23:45 crc kubenswrapper[4772]: E1128 11:23:45.408647 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b737be1-c629-4f60-9fb5-6102b6ab5cc0" containerName="mariadb-database-create" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.408656 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b737be1-c629-4f60-9fb5-6102b6ab5cc0" containerName="mariadb-database-create" Nov 28 11:23:45 crc kubenswrapper[4772]: E1128 11:23:45.408677 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2a5e9d0-5893-460f-8f77-f896d57f515c" containerName="mariadb-account-create-update" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.408685 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2a5e9d0-5893-460f-8f77-f896d57f515c" containerName="mariadb-account-create-update" Nov 28 11:23:45 crc kubenswrapper[4772]: E1128 11:23:45.408706 4772 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c32df24-75af-46c6-bc3a-defed81bd9e0" containerName="mariadb-database-create" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.408714 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c32df24-75af-46c6-bc3a-defed81bd9e0" containerName="mariadb-database-create" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.408933 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c32df24-75af-46c6-bc3a-defed81bd9e0" containerName="mariadb-database-create" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.408958 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="76af22b2-2887-4353-9228-25c97bd23c28" containerName="mariadb-account-create-update" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.408978 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2a5e9d0-5893-460f-8f77-f896d57f515c" containerName="mariadb-account-create-update" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.408987 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b737be1-c629-4f60-9fb5-6102b6ab5cc0" containerName="mariadb-database-create" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.409947 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dgrqq" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.421084 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dgrqq"] Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.502044 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdhv5\" (UniqueName: \"kubernetes.io/projected/e7c0bb97-5fda-4dad-a178-1b336bf97c74-kube-api-access-fdhv5\") pod \"glance-db-create-dgrqq\" (UID: \"e7c0bb97-5fda-4dad-a178-1b336bf97c74\") " pod="openstack/glance-db-create-dgrqq" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.502179 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7c0bb97-5fda-4dad-a178-1b336bf97c74-operator-scripts\") pod \"glance-db-create-dgrqq\" (UID: \"e7c0bb97-5fda-4dad-a178-1b336bf97c74\") " pod="openstack/glance-db-create-dgrqq" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.508266 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2145-account-create-update-5r2gh"] Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.509769 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2145-account-create-update-5r2gh" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.514556 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.518859 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2145-account-create-update-5r2gh"] Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.552091 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.604136 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdhv5\" (UniqueName: \"kubernetes.io/projected/e7c0bb97-5fda-4dad-a178-1b336bf97c74-kube-api-access-fdhv5\") pod \"glance-db-create-dgrqq\" (UID: \"e7c0bb97-5fda-4dad-a178-1b336bf97c74\") " pod="openstack/glance-db-create-dgrqq" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.604252 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qpvw\" (UniqueName: \"kubernetes.io/projected/49695051-bd3b-4650-8ac7-298d77ffd567-kube-api-access-4qpvw\") pod \"glance-2145-account-create-update-5r2gh\" (UID: \"49695051-bd3b-4650-8ac7-298d77ffd567\") " pod="openstack/glance-2145-account-create-update-5r2gh" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.604297 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7c0bb97-5fda-4dad-a178-1b336bf97c74-operator-scripts\") pod \"glance-db-create-dgrqq\" (UID: \"e7c0bb97-5fda-4dad-a178-1b336bf97c74\") " pod="openstack/glance-db-create-dgrqq" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.604368 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49695051-bd3b-4650-8ac7-298d77ffd567-operator-scripts\") pod \"glance-2145-account-create-update-5r2gh\" (UID: \"49695051-bd3b-4650-8ac7-298d77ffd567\") " pod="openstack/glance-2145-account-create-update-5r2gh" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.605645 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7c0bb97-5fda-4dad-a178-1b336bf97c74-operator-scripts\") pod \"glance-db-create-dgrqq\" (UID: \"e7c0bb97-5fda-4dad-a178-1b336bf97c74\") " pod="openstack/glance-db-create-dgrqq" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.665759 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdhv5\" (UniqueName: \"kubernetes.io/projected/e7c0bb97-5fda-4dad-a178-1b336bf97c74-kube-api-access-fdhv5\") pod \"glance-db-create-dgrqq\" (UID: \"e7c0bb97-5fda-4dad-a178-1b336bf97c74\") " pod="openstack/glance-db-create-dgrqq" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.706569 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49695051-bd3b-4650-8ac7-298d77ffd567-operator-scripts\") pod \"glance-2145-account-create-update-5r2gh\" (UID: \"49695051-bd3b-4650-8ac7-298d77ffd567\") " pod="openstack/glance-2145-account-create-update-5r2gh" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.706766 4772 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4qpvw\" (UniqueName: \"kubernetes.io/projected/49695051-bd3b-4650-8ac7-298d77ffd567-kube-api-access-4qpvw\") pod \"glance-2145-account-create-update-5r2gh\" (UID: \"49695051-bd3b-4650-8ac7-298d77ffd567\") " pod="openstack/glance-2145-account-create-update-5r2gh" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.707477 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49695051-bd3b-4650-8ac7-298d77ffd567-operator-scripts\") pod \"glance-2145-account-create-update-5r2gh\" (UID: \"49695051-bd3b-4650-8ac7-298d77ffd567\") " pod="openstack/glance-2145-account-create-update-5r2gh" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.724603 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qpvw\" (UniqueName: \"kubernetes.io/projected/49695051-bd3b-4650-8ac7-298d77ffd567-kube-api-access-4qpvw\") pod \"glance-2145-account-create-update-5r2gh\" (UID: \"49695051-bd3b-4650-8ac7-298d77ffd567\") " pod="openstack/glance-2145-account-create-update-5r2gh" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.735815 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dgrqq" Nov 28 11:23:45 crc kubenswrapper[4772]: I1128 11:23:45.842719 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2145-account-create-update-5r2gh" Nov 28 11:23:46 crc kubenswrapper[4772]: I1128 11:23:46.047199 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:46 crc kubenswrapper[4772]: I1128 11:23:46.128771 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-bf8rd"] Nov 28 11:23:46 crc kubenswrapper[4772]: I1128 11:23:46.204505 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dgrqq"] Nov 28 11:23:46 crc kubenswrapper[4772]: I1128 11:23:46.240977 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dgrqq" event={"ID":"e7c0bb97-5fda-4dad-a178-1b336bf97c74","Type":"ContainerStarted","Data":"4cc7ad485b83bb9a2c6265dcd042578af2642f47923c939b383dd8ae150ed4e3"} Nov 28 11:23:46 crc kubenswrapper[4772]: I1128 11:23:46.241207 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" podUID="58d95407-3b7a-4ada-95c4-dcf358e19f01" containerName="dnsmasq-dns" containerID="cri-o://325aac35ee9c06183a1cfce90a8e9a8430330b687c42121863426c36a63bb976" gracePeriod=10 Nov 28 11:23:46 crc kubenswrapper[4772]: I1128 11:23:46.417316 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2145-account-create-update-5r2gh"] Nov 28 11:23:46 crc kubenswrapper[4772]: W1128 11:23:46.423182 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49695051_bd3b_4650_8ac7_298d77ffd567.slice/crio-6fa1f086da21a854ac3364c23b924596afd579a098afd75635ba4e6aec929986 WatchSource:0}: Error finding container 6fa1f086da21a854ac3364c23b924596afd579a098afd75635ba4e6aec929986: Status 404 returned error can't find the container with id 6fa1f086da21a854ac3364c23b924596afd579a098afd75635ba4e6aec929986 Nov 28 11:23:46 crc kubenswrapper[4772]: I1128 11:23:46.818414 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:46 crc kubenswrapper[4772]: I1128 11:23:46.933653 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dngx\" (UniqueName: \"kubernetes.io/projected/58d95407-3b7a-4ada-95c4-dcf358e19f01-kube-api-access-7dngx\") pod \"58d95407-3b7a-4ada-95c4-dcf358e19f01\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " Nov 28 11:23:46 crc kubenswrapper[4772]: I1128 11:23:46.933725 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-ovsdbserver-sb\") pod \"58d95407-3b7a-4ada-95c4-dcf358e19f01\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " Nov 28 11:23:46 crc kubenswrapper[4772]: I1128 11:23:46.933835 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-dns-svc\") pod \"58d95407-3b7a-4ada-95c4-dcf358e19f01\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " Nov 28 11:23:46 crc kubenswrapper[4772]: I1128 11:23:46.934274 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-config\") pod \"58d95407-3b7a-4ada-95c4-dcf358e19f01\" (UID: \"58d95407-3b7a-4ada-95c4-dcf358e19f01\") " Nov 28 11:23:46 crc kubenswrapper[4772]: I1128 11:23:46.963415 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d95407-3b7a-4ada-95c4-dcf358e19f01-kube-api-access-7dngx" (OuterVolumeSpecName: "kube-api-access-7dngx") pod "58d95407-3b7a-4ada-95c4-dcf358e19f01" (UID: "58d95407-3b7a-4ada-95c4-dcf358e19f01"). InnerVolumeSpecName "kube-api-access-7dngx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.033266 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-config" (OuterVolumeSpecName: "config") pod "58d95407-3b7a-4ada-95c4-dcf358e19f01" (UID: "58d95407-3b7a-4ada-95c4-dcf358e19f01"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.038625 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.038670 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dngx\" (UniqueName: \"kubernetes.io/projected/58d95407-3b7a-4ada-95c4-dcf358e19f01-kube-api-access-7dngx\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.081011 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "58d95407-3b7a-4ada-95c4-dcf358e19f01" (UID: "58d95407-3b7a-4ada-95c4-dcf358e19f01"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.082523 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9vs4q"] Nov 28 11:23:47 crc kubenswrapper[4772]: E1128 11:23:47.083042 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d95407-3b7a-4ada-95c4-dcf358e19f01" containerName="dnsmasq-dns" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.083059 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d95407-3b7a-4ada-95c4-dcf358e19f01" containerName="dnsmasq-dns" Nov 28 11:23:47 crc kubenswrapper[4772]: E1128 11:23:47.083075 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d95407-3b7a-4ada-95c4-dcf358e19f01" containerName="init" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.083080 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d95407-3b7a-4ada-95c4-dcf358e19f01" containerName="init" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.083280 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d95407-3b7a-4ada-95c4-dcf358e19f01" containerName="dnsmasq-dns" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.084741 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.100557 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9vs4q"] Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.103836 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "58d95407-3b7a-4ada-95c4-dcf358e19f01" (UID: "58d95407-3b7a-4ada-95c4-dcf358e19f01"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.141385 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.141439 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58d95407-3b7a-4ada-95c4-dcf358e19f01-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.243620 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.243707 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-config\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.243763 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsljk\" (UniqueName: \"kubernetes.io/projected/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-kube-api-access-gsljk\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.243789 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.243823 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.253059 4772 generic.go:334] "Generic (PLEG): container finished" podID="49695051-bd3b-4650-8ac7-298d77ffd567" containerID="87aee1b929be7af24244766c2f9dd1484218f1fec758409170c0654e4cd90347" exitCode=0 Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.253143 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2145-account-create-update-5r2gh" event={"ID":"49695051-bd3b-4650-8ac7-298d77ffd567","Type":"ContainerDied","Data":"87aee1b929be7af24244766c2f9dd1484218f1fec758409170c0654e4cd90347"} Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.253186 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2145-account-create-update-5r2gh" event={"ID":"49695051-bd3b-4650-8ac7-298d77ffd567","Type":"ContainerStarted","Data":"6fa1f086da21a854ac3364c23b924596afd579a098afd75635ba4e6aec929986"} Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 
11:23:47.257990 4772 generic.go:334] "Generic (PLEG): container finished" podID="e7c0bb97-5fda-4dad-a178-1b336bf97c74" containerID="6e1b8590bc271c8ed2abc0c8918619045fdfacff933981cf83b8f057e707c316" exitCode=0 Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.258048 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dgrqq" event={"ID":"e7c0bb97-5fda-4dad-a178-1b336bf97c74","Type":"ContainerDied","Data":"6e1b8590bc271c8ed2abc0c8918619045fdfacff933981cf83b8f057e707c316"} Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.261921 4772 generic.go:334] "Generic (PLEG): container finished" podID="58d95407-3b7a-4ada-95c4-dcf358e19f01" containerID="325aac35ee9c06183a1cfce90a8e9a8430330b687c42121863426c36a63bb976" exitCode=0 Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.261961 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" event={"ID":"58d95407-3b7a-4ada-95c4-dcf358e19f01","Type":"ContainerDied","Data":"325aac35ee9c06183a1cfce90a8e9a8430330b687c42121863426c36a63bb976"} Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.261979 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" event={"ID":"58d95407-3b7a-4ada-95c4-dcf358e19f01","Type":"ContainerDied","Data":"2f1b220bc6a9302439fc238c7a2849f0f1eccaabc833e64ab84ec9c0fe261073"} Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.262001 4772 scope.go:117] "RemoveContainer" containerID="325aac35ee9c06183a1cfce90a8e9a8430330b687c42121863426c36a63bb976" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.262130 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-bf8rd" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.295708 4772 scope.go:117] "RemoveContainer" containerID="f0629b22a328fcd76b28f2d498300fce43990ca58e5af212a4b6e4d817ed3f61" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.338541 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-bf8rd"] Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.345736 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.345806 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-config\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.345843 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsljk\" (UniqueName: \"kubernetes.io/projected/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-kube-api-access-gsljk\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.345864 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: 
\"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.345898 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.346932 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.347491 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.347922 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.348292 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-config\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.350549 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-bf8rd"] Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.354079 4772 scope.go:117] "RemoveContainer" containerID="325aac35ee9c06183a1cfce90a8e9a8430330b687c42121863426c36a63bb976" Nov 28 11:23:47 crc kubenswrapper[4772]: E1128 11:23:47.358953 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325aac35ee9c06183a1cfce90a8e9a8430330b687c42121863426c36a63bb976\": container with ID starting with 325aac35ee9c06183a1cfce90a8e9a8430330b687c42121863426c36a63bb976 not found: ID does not exist" containerID="325aac35ee9c06183a1cfce90a8e9a8430330b687c42121863426c36a63bb976" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.358996 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325aac35ee9c06183a1cfce90a8e9a8430330b687c42121863426c36a63bb976"} err="failed to get container status \"325aac35ee9c06183a1cfce90a8e9a8430330b687c42121863426c36a63bb976\": rpc error: code = NotFound desc = could not find container \"325aac35ee9c06183a1cfce90a8e9a8430330b687c42121863426c36a63bb976\": container with ID starting with 325aac35ee9c06183a1cfce90a8e9a8430330b687c42121863426c36a63bb976 not found: ID does not exist" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.359027 4772 scope.go:117] "RemoveContainer" containerID="f0629b22a328fcd76b28f2d498300fce43990ca58e5af212a4b6e4d817ed3f61" Nov 28 
11:23:47 crc kubenswrapper[4772]: E1128 11:23:47.362933 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0629b22a328fcd76b28f2d498300fce43990ca58e5af212a4b6e4d817ed3f61\": container with ID starting with f0629b22a328fcd76b28f2d498300fce43990ca58e5af212a4b6e4d817ed3f61 not found: ID does not exist" containerID="f0629b22a328fcd76b28f2d498300fce43990ca58e5af212a4b6e4d817ed3f61" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.362997 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0629b22a328fcd76b28f2d498300fce43990ca58e5af212a4b6e4d817ed3f61"} err="failed to get container status \"f0629b22a328fcd76b28f2d498300fce43990ca58e5af212a4b6e4d817ed3f61\": rpc error: code = NotFound desc = could not find container \"f0629b22a328fcd76b28f2d498300fce43990ca58e5af212a4b6e4d817ed3f61\": container with ID starting with f0629b22a328fcd76b28f2d498300fce43990ca58e5af212a4b6e4d817ed3f61 not found: ID does not exist" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.386339 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsljk\" (UniqueName: \"kubernetes.io/projected/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-kube-api-access-gsljk\") pod \"dnsmasq-dns-b8fbc5445-9vs4q\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.426665 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:47 crc kubenswrapper[4772]: I1128 11:23:47.958887 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9vs4q"] Nov 28 11:23:47 crc kubenswrapper[4772]: W1128 11:23:47.970129 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddae7cb7c_0b89_4e5c_b8d5_50631df88ac4.slice/crio-8113b6cbff9e53cb513947bb3e11a44977c9703b842270ee5f75ca5a7b48028d WatchSource:0}: Error finding container 8113b6cbff9e53cb513947bb3e11a44977c9703b842270ee5f75ca5a7b48028d: Status 404 returned error can't find the container with id 8113b6cbff9e53cb513947bb3e11a44977c9703b842270ee5f75ca5a7b48028d Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.014576 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d95407-3b7a-4ada-95c4-dcf358e19f01" path="/var/lib/kubelet/pods/58d95407-3b7a-4ada-95c4-dcf358e19f01/volumes" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.161985 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.229245 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.234602 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.247903 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-k2dsf" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.248130 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.248313 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.248471 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.289086 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" event={"ID":"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4","Type":"ContainerStarted","Data":"8113b6cbff9e53cb513947bb3e11a44977c9703b842270ee5f75ca5a7b48028d"} Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.402839 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dp2h\" (UniqueName: \"kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-kube-api-access-4dp2h\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.402936 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e9a12326-4e22-49fc-a6d5-b103867d9d0c-lock\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.403028 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.403320 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e9a12326-4e22-49fc-a6d5-b103867d9d0c-cache\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.403386 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.504821 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e9a12326-4e22-49fc-a6d5-b103867d9d0c-lock\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.504880 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " 
pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.504921 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e9a12326-4e22-49fc-a6d5-b103867d9d0c-cache\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.504950 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.505001 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dp2h\" (UniqueName: \"kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-kube-api-access-4dp2h\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: E1128 11:23:48.505431 4772 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 11:23:48 crc kubenswrapper[4772]: E1128 11:23:48.505447 4772 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 11:23:48 crc kubenswrapper[4772]: E1128 11:23:48.505488 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift podName:e9a12326-4e22-49fc-a6d5-b103867d9d0c nodeName:}" failed. No retries permitted until 2025-11-28 11:23:49.00547212 +0000 UTC m=+1027.328715347 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift") pod "swift-storage-0" (UID: "e9a12326-4e22-49fc-a6d5-b103867d9d0c") : configmap "swift-ring-files" not found Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.506131 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e9a12326-4e22-49fc-a6d5-b103867d9d0c-cache\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.506316 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e9a12326-4e22-49fc-a6d5-b103867d9d0c-lock\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.506491 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.527768 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dp2h\" (UniqueName: \"kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-kube-api-access-4dp2h\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.536213 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.769464 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2145-account-create-update-5r2gh" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.817238 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-xn77j"] Nov 28 11:23:48 crc kubenswrapper[4772]: E1128 11:23:48.828248 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49695051-bd3b-4650-8ac7-298d77ffd567" containerName="mariadb-account-create-update" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.828316 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="49695051-bd3b-4650-8ac7-298d77ffd567" containerName="mariadb-account-create-update" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.828602 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="49695051-bd3b-4650-8ac7-298d77ffd567" containerName="mariadb-account-create-update" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.829424 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.835724 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.835999 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.847301 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.863549 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dgrqq" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.868149 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-xn77j"] Nov 28 11:23:48 crc kubenswrapper[4772]: E1128 11:23:48.880251 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-cph2v ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-xn77j" podUID="4e15a4cb-b921-4f8d-887b-ce0c7dbe2889" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.897446 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wmtrl"] Nov 28 11:23:48 crc kubenswrapper[4772]: E1128 11:23:48.897930 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c0bb97-5fda-4dad-a178-1b336bf97c74" containerName="mariadb-database-create" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.897947 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c0bb97-5fda-4dad-a178-1b336bf97c74" containerName="mariadb-database-create" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.898125 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c0bb97-5fda-4dad-a178-1b336bf97c74" containerName="mariadb-database-create" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.898784 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.914793 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qpvw\" (UniqueName: \"kubernetes.io/projected/49695051-bd3b-4650-8ac7-298d77ffd567-kube-api-access-4qpvw\") pod \"49695051-bd3b-4650-8ac7-298d77ffd567\" (UID: \"49695051-bd3b-4650-8ac7-298d77ffd567\") " Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.914919 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49695051-bd3b-4650-8ac7-298d77ffd567-operator-scripts\") pod \"49695051-bd3b-4650-8ac7-298d77ffd567\" (UID: \"49695051-bd3b-4650-8ac7-298d77ffd567\") " Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.915415 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-dispersionconf\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.915453 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-ring-data-devices\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.915505 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-scripts\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.915564 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-swiftconf\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.915614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-etc-swift\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.915645 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cph2v\" (UniqueName: \"kubernetes.io/projected/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-kube-api-access-cph2v\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.915663 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-combined-ca-bundle\") pod \"swift-ring-rebalance-xn77j\" (UID: 
\"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.917234 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49695051-bd3b-4650-8ac7-298d77ffd567-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49695051-bd3b-4650-8ac7-298d77ffd567" (UID: "49695051-bd3b-4650-8ac7-298d77ffd567"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.920350 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49695051-bd3b-4650-8ac7-298d77ffd567-kube-api-access-4qpvw" (OuterVolumeSpecName: "kube-api-access-4qpvw") pod "49695051-bd3b-4650-8ac7-298d77ffd567" (UID: "49695051-bd3b-4650-8ac7-298d77ffd567"). InnerVolumeSpecName "kube-api-access-4qpvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.924896 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-xn77j"] Nov 28 11:23:48 crc kubenswrapper[4772]: I1128 11:23:48.933401 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wmtrl"] Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.017476 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7c0bb97-5fda-4dad-a178-1b336bf97c74-operator-scripts\") pod \"e7c0bb97-5fda-4dad-a178-1b336bf97c74\" (UID: \"e7c0bb97-5fda-4dad-a178-1b336bf97c74\") " Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.017607 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdhv5\" (UniqueName: \"kubernetes.io/projected/e7c0bb97-5fda-4dad-a178-1b336bf97c74-kube-api-access-fdhv5\") pod \"e7c0bb97-5fda-4dad-a178-1b336bf97c74\" (UID: \"e7c0bb97-5fda-4dad-a178-1b336bf97c74\") " Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.018095 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cph2v\" (UniqueName: \"kubernetes.io/projected/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-kube-api-access-cph2v\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.018098 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7c0bb97-5fda-4dad-a178-1b336bf97c74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7c0bb97-5fda-4dad-a178-1b336bf97c74" (UID: "e7c0bb97-5fda-4dad-a178-1b336bf97c74"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.018126 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-combined-ca-bundle\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.018283 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-combined-ca-bundle\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.018381 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-dispersionconf\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.018419 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-ring-data-devices\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.018454 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfc8j\" (UniqueName: \"kubernetes.io/projected/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-kube-api-access-lfc8j\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.018498 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.018562 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-scripts\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.018592 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-scripts\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.018685 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-swiftconf\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 
28 11:23:49 crc kubenswrapper[4772]: E1128 11:23:49.018730 4772 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 11:23:49 crc kubenswrapper[4772]: E1128 11:23:49.018762 4772 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.018787 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-dispersionconf\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: E1128 11:23:49.018816 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift podName:e9a12326-4e22-49fc-a6d5-b103867d9d0c nodeName:}" failed. No retries permitted until 2025-11-28 11:23:50.018794968 +0000 UTC m=+1028.342038195 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift") pod "swift-storage-0" (UID: "e9a12326-4e22-49fc-a6d5-b103867d9d0c") : configmap "swift-ring-files" not found Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.018848 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-etc-swift\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.018901 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-swiftconf\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.019069 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-ring-data-devices\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.019118 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-etc-swift\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.019276 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7c0bb97-5fda-4dad-a178-1b336bf97c74-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.019299 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qpvw\" (UniqueName: \"kubernetes.io/projected/49695051-bd3b-4650-8ac7-298d77ffd567-kube-api-access-4qpvw\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:49 crc 
kubenswrapper[4772]: I1128 11:23:49.019317 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49695051-bd3b-4650-8ac7-298d77ffd567-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.019546 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-ring-data-devices\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.019635 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-scripts\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.019702 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-etc-swift\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.023886 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c0bb97-5fda-4dad-a178-1b336bf97c74-kube-api-access-fdhv5" (OuterVolumeSpecName: "kube-api-access-fdhv5") pod "e7c0bb97-5fda-4dad-a178-1b336bf97c74" (UID: "e7c0bb97-5fda-4dad-a178-1b336bf97c74"). InnerVolumeSpecName "kube-api-access-fdhv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.023968 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-dispersionconf\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.024042 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-swiftconf\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.024528 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-combined-ca-bundle\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.042231 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cph2v\" (UniqueName: \"kubernetes.io/projected/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-kube-api-access-cph2v\") pod \"swift-ring-rebalance-xn77j\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.120898 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-swiftconf\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.120979 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-dispersionconf\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.121008 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-etc-swift\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.121062 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-ring-data-devices\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.121111 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-combined-ca-bundle\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.121142 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfc8j\" (UniqueName: \"kubernetes.io/projected/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-kube-api-access-lfc8j\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.121189 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-scripts\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.121252 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdhv5\" (UniqueName: \"kubernetes.io/projected/e7c0bb97-5fda-4dad-a178-1b336bf97c74-kube-api-access-fdhv5\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.121747 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-etc-swift\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.122199 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-scripts\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc 
kubenswrapper[4772]: I1128 11:23:49.122491 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-ring-data-devices\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.124733 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-dispersionconf\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.125038 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-combined-ca-bundle\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.125410 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-swiftconf\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.143709 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfc8j\" (UniqueName: \"kubernetes.io/projected/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-kube-api-access-lfc8j\") pod \"swift-ring-rebalance-wmtrl\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.161853 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.218123 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.265682 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.309961 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2145-account-create-update-5r2gh" event={"ID":"49695051-bd3b-4650-8ac7-298d77ffd567","Type":"ContainerDied","Data":"6fa1f086da21a854ac3364c23b924596afd579a098afd75635ba4e6aec929986"} Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.310018 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fa1f086da21a854ac3364c23b924596afd579a098afd75635ba4e6aec929986" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.310104 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2145-account-create-update-5r2gh" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.335201 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dgrqq" event={"ID":"e7c0bb97-5fda-4dad-a178-1b336bf97c74","Type":"ContainerDied","Data":"4cc7ad485b83bb9a2c6265dcd042578af2642f47923c939b383dd8ae150ed4e3"} Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.335258 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cc7ad485b83bb9a2c6265dcd042578af2642f47923c939b383dd8ae150ed4e3" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.335370 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dgrqq" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.343204 4772 generic.go:334] "Generic (PLEG): container finished" podID="dae7cb7c-0b89-4e5c-b8d5-50631df88ac4" containerID="0961d02c2007cad4d5784da8e186ee4cd9319fa79da6e19f2b93737311f54878" exitCode=0 Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.343306 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.343598 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" event={"ID":"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4","Type":"ContainerDied","Data":"0961d02c2007cad4d5784da8e186ee4cd9319fa79da6e19f2b93737311f54878"} Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.363597 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.426452 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-combined-ca-bundle\") pod \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.426632 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-dispersionconf\") pod \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.426708 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-ring-data-devices\") pod \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.426766 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-etc-swift\") pod \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.426794 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-scripts\") pod \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.426895 4772 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-swiftconf\") pod \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.426969 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cph2v\" (UniqueName: \"kubernetes.io/projected/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-kube-api-access-cph2v\") pod \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\" (UID: \"4e15a4cb-b921-4f8d-887b-ce0c7dbe2889\") " Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.427643 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4e15a4cb-b921-4f8d-887b-ce0c7dbe2889" (UID: "4e15a4cb-b921-4f8d-887b-ce0c7dbe2889"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.427733 4772 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.428806 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-scripts" (OuterVolumeSpecName: "scripts") pod "4e15a4cb-b921-4f8d-887b-ce0c7dbe2889" (UID: "4e15a4cb-b921-4f8d-887b-ce0c7dbe2889"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.430522 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4e15a4cb-b921-4f8d-887b-ce0c7dbe2889" (UID: "4e15a4cb-b921-4f8d-887b-ce0c7dbe2889"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.432246 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4e15a4cb-b921-4f8d-887b-ce0c7dbe2889" (UID: "4e15a4cb-b921-4f8d-887b-ce0c7dbe2889"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.433104 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e15a4cb-b921-4f8d-887b-ce0c7dbe2889" (UID: "4e15a4cb-b921-4f8d-887b-ce0c7dbe2889"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.434022 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-kube-api-access-cph2v" (OuterVolumeSpecName: "kube-api-access-cph2v") pod "4e15a4cb-b921-4f8d-887b-ce0c7dbe2889" (UID: "4e15a4cb-b921-4f8d-887b-ce0c7dbe2889"). InnerVolumeSpecName "kube-api-access-cph2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.435263 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4e15a4cb-b921-4f8d-887b-ce0c7dbe2889" (UID: "4e15a4cb-b921-4f8d-887b-ce0c7dbe2889"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.529660 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cph2v\" (UniqueName: \"kubernetes.io/projected/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-kube-api-access-cph2v\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.529688 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.529699 4772 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.529708 4772 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.529717 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.529730 4772 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:49 crc kubenswrapper[4772]: I1128 11:23:49.808528 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wmtrl"] Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.039089 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:50 crc kubenswrapper[4772]: E1128 11:23:50.039450 4772 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 11:23:50 crc kubenswrapper[4772]: E1128 11:23:50.039506 4772 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 11:23:50 crc kubenswrapper[4772]: E1128 11:23:50.039609 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift podName:e9a12326-4e22-49fc-a6d5-b103867d9d0c nodeName:}" failed. No retries permitted until 2025-11-28 11:23:52.039575193 +0000 UTC m=+1030.362818450 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift") pod "swift-storage-0" (UID: "e9a12326-4e22-49fc-a6d5-b103867d9d0c") : configmap "swift-ring-files" not found Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.353158 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wmtrl" event={"ID":"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6","Type":"ContainerStarted","Data":"5af81524cd1cc2381c58214a2399893c42cad2e22d49d1fe6c54c7a776f077d6"} Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.355293 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" event={"ID":"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4","Type":"ContainerStarted","Data":"0521a88c714dc05b0b9abe12b7e8373b5579b11282455add165c1e84e55cbf5f"} Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.355319 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-xn77j" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.355504 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.381826 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" podStartSLOduration=3.381805482 podStartE2EDuration="3.381805482s" podCreationTimestamp="2025-11-28 11:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:23:50.37768766 +0000 UTC m=+1028.700930887" watchObservedRunningTime="2025-11-28 11:23:50.381805482 +0000 UTC m=+1028.705048709" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.436429 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-xn77j"] Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.449097 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-xn77j"] Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.751310 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-r6s87"] Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.753268 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-r6s87" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.757619 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vv5mt" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.757781 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.764754 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-r6s87"] Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.855465 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-combined-ca-bundle\") pod \"glance-db-sync-r6s87\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " pod="openstack/glance-db-sync-r6s87" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.855543 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-config-data\") pod \"glance-db-sync-r6s87\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " pod="openstack/glance-db-sync-r6s87" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.855773 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-db-sync-config-data\") pod \"glance-db-sync-r6s87\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " pod="openstack/glance-db-sync-r6s87" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.855805 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kdt6\" (UniqueName: \"kubernetes.io/projected/09675021-323d-41ee-aaaa-bee4e83e2544-kube-api-access-7kdt6\") pod \"glance-db-sync-r6s87\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " pod="openstack/glance-db-sync-r6s87" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.957304 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kdt6\" (UniqueName: \"kubernetes.io/projected/09675021-323d-41ee-aaaa-bee4e83e2544-kube-api-access-7kdt6\") pod \"glance-db-sync-r6s87\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " pod="openstack/glance-db-sync-r6s87" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.957431 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-combined-ca-bundle\") pod \"glance-db-sync-r6s87\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " pod="openstack/glance-db-sync-r6s87" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.957461 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-config-data\") pod \"glance-db-sync-r6s87\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " pod="openstack/glance-db-sync-r6s87" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.957547 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-db-sync-config-data\") pod 
\"glance-db-sync-r6s87\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " pod="openstack/glance-db-sync-r6s87" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.965680 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-combined-ca-bundle\") pod \"glance-db-sync-r6s87\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " pod="openstack/glance-db-sync-r6s87" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.969190 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-config-data\") pod \"glance-db-sync-r6s87\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " pod="openstack/glance-db-sync-r6s87" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.975396 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-db-sync-config-data\") pod \"glance-db-sync-r6s87\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " pod="openstack/glance-db-sync-r6s87" Nov 28 11:23:50 crc kubenswrapper[4772]: I1128 11:23:50.977501 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kdt6\" (UniqueName: \"kubernetes.io/projected/09675021-323d-41ee-aaaa-bee4e83e2544-kube-api-access-7kdt6\") pod \"glance-db-sync-r6s87\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " pod="openstack/glance-db-sync-r6s87" Nov 28 11:23:51 crc kubenswrapper[4772]: I1128 11:23:51.108539 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r6s87" Nov 28 11:23:51 crc kubenswrapper[4772]: I1128 11:23:51.745240 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-r6s87"] Nov 28 11:23:52 crc kubenswrapper[4772]: I1128 11:23:52.012802 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e15a4cb-b921-4f8d-887b-ce0c7dbe2889" path="/var/lib/kubelet/pods/4e15a4cb-b921-4f8d-887b-ce0c7dbe2889/volumes" Nov 28 11:23:52 crc kubenswrapper[4772]: I1128 11:23:52.081912 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:52 crc kubenswrapper[4772]: E1128 11:23:52.083299 4772 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 11:23:52 crc kubenswrapper[4772]: E1128 11:23:52.083324 4772 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 11:23:52 crc kubenswrapper[4772]: E1128 11:23:52.083384 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift podName:e9a12326-4e22-49fc-a6d5-b103867d9d0c nodeName:}" failed. No retries permitted until 2025-11-28 11:23:56.083351253 +0000 UTC m=+1034.406594480 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift") pod "swift-storage-0" (UID: "e9a12326-4e22-49fc-a6d5-b103867d9d0c") : configmap "swift-ring-files" not found Nov 28 11:23:52 crc kubenswrapper[4772]: I1128 11:23:52.627918 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 28 11:23:53 crc kubenswrapper[4772]: I1128 11:23:53.896825 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:23:53 crc kubenswrapper[4772]: I1128 11:23:53.896893 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:23:53 crc kubenswrapper[4772]: W1128 11:23:53.905845 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09675021_323d_41ee_aaaa_bee4e83e2544.slice/crio-9575be3c4cf9b38c451d26c9ffced81e0094d6c2d17498b31edf41df229a0f82 WatchSource:0}: Error finding container 9575be3c4cf9b38c451d26c9ffced81e0094d6c2d17498b31edf41df229a0f82: Status 404 returned error can't find the container with id 9575be3c4cf9b38c451d26c9ffced81e0094d6c2d17498b31edf41df229a0f82 Nov 28 11:23:54 crc kubenswrapper[4772]: I1128 11:23:54.408903 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wmtrl" event={"ID":"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6","Type":"ContainerStarted","Data":"fe36ee646e73bcd6871b102fb7b85df2479eb68129ca66e6337c53e9346a6548"} Nov 28 11:23:54 crc kubenswrapper[4772]: I1128 11:23:54.412591 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r6s87" event={"ID":"09675021-323d-41ee-aaaa-bee4e83e2544","Type":"ContainerStarted","Data":"9575be3c4cf9b38c451d26c9ffced81e0094d6c2d17498b31edf41df229a0f82"} Nov 28 11:23:54 crc kubenswrapper[4772]: I1128 11:23:54.435684 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-wmtrl" podStartSLOduration=2.255341459 podStartE2EDuration="6.435655565s" podCreationTimestamp="2025-11-28 11:23:48 +0000 UTC" firstStartedPulling="2025-11-28 11:23:49.779108321 +0000 UTC m=+1028.102351548" lastFinishedPulling="2025-11-28 11:23:53.959422427 +0000 UTC m=+1032.282665654" observedRunningTime="2025-11-28 11:23:54.434212075 +0000 UTC m=+1032.757455322" watchObservedRunningTime="2025-11-28 11:23:54.435655565 +0000 UTC m=+1032.758898812" Nov 28 11:23:55 crc kubenswrapper[4772]: I1128 11:23:55.653202 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qzrrh" podUID="42fee486-89c9-4f0e-9db6-ac695b62a588" containerName="ovn-controller" probeResult="failure" output=< Nov 28 11:23:55 crc kubenswrapper[4772]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 11:23:55 crc kubenswrapper[4772]: > Nov 28 11:23:55 crc kubenswrapper[4772]: I1128 11:23:55.690324 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ovn-controller-ovs-gjbm2" Nov 28 11:23:56 crc kubenswrapper[4772]: I1128 11:23:56.184720 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:23:56 crc kubenswrapper[4772]: E1128 11:23:56.185102 4772 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 11:23:56 crc kubenswrapper[4772]: E1128 11:23:56.185125 4772 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 11:23:56 crc kubenswrapper[4772]: E1128 11:23:56.185190 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift podName:e9a12326-4e22-49fc-a6d5-b103867d9d0c nodeName:}" failed. No retries permitted until 2025-11-28 11:24:04.185167492 +0000 UTC m=+1042.508410729 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift") pod "swift-storage-0" (UID: "e9a12326-4e22-49fc-a6d5-b103867d9d0c") : configmap "swift-ring-files" not found Nov 28 11:23:57 crc kubenswrapper[4772]: I1128 11:23:57.428289 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:23:57 crc kubenswrapper[4772]: I1128 11:23:57.535637 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-s7mtd"] Nov 28 11:23:57 crc kubenswrapper[4772]: I1128 11:23:57.536081 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-s7mtd" podUID="78bd45e2-bdb4-4286-905f-9a5470c9728b" containerName="dnsmasq-dns" containerID="cri-o://1741d699a20ff41779fe5fabdebfa740333826a7c09c7a922a146dd53669df46" gracePeriod=10 Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.492732 4772 generic.go:334] "Generic (PLEG): container finished" podID="78bd45e2-bdb4-4286-905f-9a5470c9728b" containerID="1741d699a20ff41779fe5fabdebfa740333826a7c09c7a922a146dd53669df46" exitCode=0 Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.493654 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-s7mtd" event={"ID":"78bd45e2-bdb4-4286-905f-9a5470c9728b","Type":"ContainerDied","Data":"1741d699a20ff41779fe5fabdebfa740333826a7c09c7a922a146dd53669df46"} Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.624931 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.746795 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-dns-svc\") pod \"78bd45e2-bdb4-4286-905f-9a5470c9728b\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.746870 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-config\") pod \"78bd45e2-bdb4-4286-905f-9a5470c9728b\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.746944 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-ovsdbserver-sb\") pod \"78bd45e2-bdb4-4286-905f-9a5470c9728b\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.746998 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-ovsdbserver-nb\") pod \"78bd45e2-bdb4-4286-905f-9a5470c9728b\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.747177 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqwlq\" (UniqueName: \"kubernetes.io/projected/78bd45e2-bdb4-4286-905f-9a5470c9728b-kube-api-access-wqwlq\") pod \"78bd45e2-bdb4-4286-905f-9a5470c9728b\" (UID: \"78bd45e2-bdb4-4286-905f-9a5470c9728b\") " Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.755100 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78bd45e2-bdb4-4286-905f-9a5470c9728b-kube-api-access-wqwlq" (OuterVolumeSpecName: "kube-api-access-wqwlq") pod "78bd45e2-bdb4-4286-905f-9a5470c9728b" (UID: "78bd45e2-bdb4-4286-905f-9a5470c9728b"). InnerVolumeSpecName "kube-api-access-wqwlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.791165 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "78bd45e2-bdb4-4286-905f-9a5470c9728b" (UID: "78bd45e2-bdb4-4286-905f-9a5470c9728b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.798702 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "78bd45e2-bdb4-4286-905f-9a5470c9728b" (UID: "78bd45e2-bdb4-4286-905f-9a5470c9728b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.799657 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "78bd45e2-bdb4-4286-905f-9a5470c9728b" (UID: "78bd45e2-bdb4-4286-905f-9a5470c9728b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.809689 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-config" (OuterVolumeSpecName: "config") pod "78bd45e2-bdb4-4286-905f-9a5470c9728b" (UID: "78bd45e2-bdb4-4286-905f-9a5470c9728b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.848735 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqwlq\" (UniqueName: \"kubernetes.io/projected/78bd45e2-bdb4-4286-905f-9a5470c9728b-kube-api-access-wqwlq\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.848894 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.848969 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.849035 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:58 crc kubenswrapper[4772]: I1128 11:23:58.849097 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bd45e2-bdb4-4286-905f-9a5470c9728b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 11:23:59 crc kubenswrapper[4772]: I1128 11:23:59.505374 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-s7mtd" event={"ID":"78bd45e2-bdb4-4286-905f-9a5470c9728b","Type":"ContainerDied","Data":"0a15b5aa68bb177058f3fdad695dae844fbc444f4a2f94a4092c6b05bf8bbefb"} Nov 28 11:23:59 crc kubenswrapper[4772]: I1128 11:23:59.505448 4772 scope.go:117] "RemoveContainer" containerID="1741d699a20ff41779fe5fabdebfa740333826a7c09c7a922a146dd53669df46" Nov 28 11:23:59 crc kubenswrapper[4772]: I1128 11:23:59.505456 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-s7mtd" Nov 28 11:23:59 crc kubenswrapper[4772]: I1128 11:23:59.557642 4772 scope.go:117] "RemoveContainer" containerID="8c7dab3be7f99b1bea4473f041f4b5bf1f1c9b50df90f5f62c273dfb9b7ce25c" Nov 28 11:23:59 crc kubenswrapper[4772]: I1128 11:23:59.563625 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-s7mtd"] Nov 28 11:23:59 crc kubenswrapper[4772]: I1128 11:23:59.608198 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-s7mtd"] Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.010591 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78bd45e2-bdb4-4286-905f-9a5470c9728b" path="/var/lib/kubelet/pods/78bd45e2-bdb4-4286-905f-9a5470c9728b/volumes" Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.664483 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qzrrh" podUID="42fee486-89c9-4f0e-9db6-ac695b62a588" containerName="ovn-controller" probeResult="failure" output=< Nov 28 11:24:00 crc kubenswrapper[4772]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 11:24:00 crc kubenswrapper[4772]: > Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.701936 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gjbm2" Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.936199 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qzrrh-config-5r4s2"] Nov 28 11:24:00 crc kubenswrapper[4772]: E1128 11:24:00.937660 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78bd45e2-bdb4-4286-905f-9a5470c9728b" containerName="init" Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.937865 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="78bd45e2-bdb4-4286-905f-9a5470c9728b" containerName="init" Nov 28 11:24:00 crc kubenswrapper[4772]: E1128 11:24:00.937946 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78bd45e2-bdb4-4286-905f-9a5470c9728b" containerName="dnsmasq-dns" Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.937999 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="78bd45e2-bdb4-4286-905f-9a5470c9728b" containerName="dnsmasq-dns" Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.938408 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="78bd45e2-bdb4-4286-905f-9a5470c9728b" containerName="dnsmasq-dns" Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.939418 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.944941 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.947570 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qzrrh-config-5r4s2"] Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.992502 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1986ff74-eed0-4c33-bba8-f3c33a6492c0-scripts\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.992846 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-run\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.992962 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-run-ovn\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.993102 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq5lg\" (UniqueName: \"kubernetes.io/projected/1986ff74-eed0-4c33-bba8-f3c33a6492c0-kube-api-access-jq5lg\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.993233 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-log-ovn\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:00 crc kubenswrapper[4772]: I1128 11:24:00.993410 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1986ff74-eed0-4c33-bba8-f3c33a6492c0-additional-scripts\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:01 crc kubenswrapper[4772]: I1128 11:24:01.094813 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1986ff74-eed0-4c33-bba8-f3c33a6492c0-additional-scripts\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:01 crc kubenswrapper[4772]: I1128 11:24:01.095145 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/1986ff74-eed0-4c33-bba8-f3c33a6492c0-scripts\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:01 crc kubenswrapper[4772]: I1128 11:24:01.095222 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-run\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:01 crc kubenswrapper[4772]: I1128 11:24:01.095312 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-run-ovn\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:01 crc kubenswrapper[4772]: I1128 11:24:01.095463 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq5lg\" (UniqueName: \"kubernetes.io/projected/1986ff74-eed0-4c33-bba8-f3c33a6492c0-kube-api-access-jq5lg\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:01 crc kubenswrapper[4772]: I1128 11:24:01.095635 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-log-ovn\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:01 crc kubenswrapper[4772]: I1128 11:24:01.095791 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1986ff74-eed0-4c33-bba8-f3c33a6492c0-additional-scripts\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:01 crc kubenswrapper[4772]: I1128 11:24:01.096563 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-run-ovn\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:01 crc kubenswrapper[4772]: I1128 11:24:01.096698 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-run\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:01 crc kubenswrapper[4772]: I1128 11:24:01.097098 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-log-ovn\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:01 crc kubenswrapper[4772]: I1128 11:24:01.098873 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/1986ff74-eed0-4c33-bba8-f3c33a6492c0-scripts\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:01 crc kubenswrapper[4772]: I1128 11:24:01.123244 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq5lg\" (UniqueName: \"kubernetes.io/projected/1986ff74-eed0-4c33-bba8-f3c33a6492c0-kube-api-access-jq5lg\") pod \"ovn-controller-qzrrh-config-5r4s2\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:01 crc kubenswrapper[4772]: I1128 11:24:01.278105 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:02 crc kubenswrapper[4772]: I1128 11:24:02.540029 4772 generic.go:334] "Generic (PLEG): container finished" podID="1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6" containerID="fe36ee646e73bcd6871b102fb7b85df2479eb68129ca66e6337c53e9346a6548" exitCode=0 Nov 28 11:24:02 crc kubenswrapper[4772]: I1128 11:24:02.540081 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wmtrl" event={"ID":"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6","Type":"ContainerDied","Data":"fe36ee646e73bcd6871b102fb7b85df2479eb68129ca66e6337c53e9346a6548"} Nov 28 11:24:04 crc kubenswrapper[4772]: I1128 11:24:04.261751 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:24:04 crc kubenswrapper[4772]: I1128 11:24:04.274567 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e9a12326-4e22-49fc-a6d5-b103867d9d0c-etc-swift\") pod \"swift-storage-0\" (UID: \"e9a12326-4e22-49fc-a6d5-b103867d9d0c\") " pod="openstack/swift-storage-0" Nov 28 11:24:04 crc kubenswrapper[4772]: I1128 11:24:04.477817 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 28 11:24:05 crc kubenswrapper[4772]: I1128 11:24:05.646858 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qzrrh" podUID="42fee486-89c9-4f0e-9db6-ac695b62a588" containerName="ovn-controller" probeResult="failure" output=< Nov 28 11:24:05 crc kubenswrapper[4772]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 11:24:05 crc kubenswrapper[4772]: > Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.380000 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.478838 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-combined-ca-bundle\") pod \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.478908 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-dispersionconf\") pod \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.479033 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-ring-data-devices\") pod \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.479095 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-swiftconf\") pod \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.479137 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-scripts\") pod \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.479166 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-etc-swift\") pod \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.479206 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfc8j\" (UniqueName: \"kubernetes.io/projected/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-kube-api-access-lfc8j\") pod \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\" (UID: \"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6\") " Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.480636 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6" (UID: "1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.481819 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6" (UID: "1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.490502 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-kube-api-access-lfc8j" (OuterVolumeSpecName: "kube-api-access-lfc8j") pod "1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6" (UID: "1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6"). InnerVolumeSpecName "kube-api-access-lfc8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.496405 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6" (UID: "1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.527692 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6" (UID: "1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.537153 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-scripts" (OuterVolumeSpecName: "scripts") pod "1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6" (UID: "1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.540614 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6" (UID: "1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.585809 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.586346 4772 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.586380 4772 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.586393 4772 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.586406 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.586419 4772 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.586434 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfc8j\" (UniqueName: \"kubernetes.io/projected/1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6-kube-api-access-lfc8j\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.617448 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wmtrl" event={"ID":"1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6","Type":"ContainerDied","Data":"5af81524cd1cc2381c58214a2399893c42cad2e22d49d1fe6c54c7a776f077d6"} Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.617501 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5af81524cd1cc2381c58214a2399893c42cad2e22d49d1fe6c54c7a776f077d6" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.617504 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wmtrl" Nov 28 11:24:08 crc kubenswrapper[4772]: I1128 11:24:08.851126 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qzrrh-config-5r4s2"] Nov 28 11:24:09 crc kubenswrapper[4772]: I1128 11:24:09.051280 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 28 11:24:09 crc kubenswrapper[4772]: W1128 11:24:09.052145 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9a12326_4e22_49fc_a6d5_b103867d9d0c.slice/crio-72d67e6299872dfc29cfbd23d96707c916fd58c295c29c411d8a5c81b3a5fdfd WatchSource:0}: Error finding container 72d67e6299872dfc29cfbd23d96707c916fd58c295c29c411d8a5c81b3a5fdfd: Status 404 returned error can't find the container with id 72d67e6299872dfc29cfbd23d96707c916fd58c295c29c411d8a5c81b3a5fdfd Nov 28 11:24:09 crc kubenswrapper[4772]: I1128 11:24:09.633698 4772 generic.go:334] "Generic (PLEG): container finished" podID="1986ff74-eed0-4c33-bba8-f3c33a6492c0" containerID="ae4a513967566458793783d28fc7842e113f09eef0def36ee1433b7011aa8228" exitCode=0 Nov 28 11:24:09 crc kubenswrapper[4772]: I1128 11:24:09.634641 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qzrrh-config-5r4s2" event={"ID":"1986ff74-eed0-4c33-bba8-f3c33a6492c0","Type":"ContainerDied","Data":"ae4a513967566458793783d28fc7842e113f09eef0def36ee1433b7011aa8228"} Nov 28 11:24:09 crc kubenswrapper[4772]: I1128 11:24:09.634672 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qzrrh-config-5r4s2" event={"ID":"1986ff74-eed0-4c33-bba8-f3c33a6492c0","Type":"ContainerStarted","Data":"7c656f011f5895d0d322df41dd15c59ee3fbcd0d1530795d07a41a003da99e28"} Nov 28 11:24:09 crc kubenswrapper[4772]: I1128 11:24:09.636226 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r6s87" event={"ID":"09675021-323d-41ee-aaaa-bee4e83e2544","Type":"ContainerStarted","Data":"81ca325c59ac763671d22b789a19f04c989f66fd290244f2e41ab1903d6eb651"} Nov 28 11:24:09 crc kubenswrapper[4772]: I1128 11:24:09.638137 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"72d67e6299872dfc29cfbd23d96707c916fd58c295c29c411d8a5c81b3a5fdfd"} Nov 28 11:24:09 crc kubenswrapper[4772]: I1128 11:24:09.694484 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-r6s87" podStartSLOduration=5.187673821 podStartE2EDuration="19.69445684s" podCreationTimestamp="2025-11-28 11:23:50 +0000 UTC" firstStartedPulling="2025-11-28 11:23:53.909029665 +0000 UTC m=+1032.232272892" lastFinishedPulling="2025-11-28 11:24:08.415812684 +0000 UTC m=+1046.739055911" observedRunningTime="2025-11-28 11:24:09.670009214 +0000 UTC m=+1047.993252441" watchObservedRunningTime="2025-11-28 11:24:09.69445684 +0000 UTC m=+1048.017700067" Nov 28 11:24:10 crc kubenswrapper[4772]: I1128 11:24:10.663371 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-qzrrh" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.031239 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.139161 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-log-ovn\") pod \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.139222 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-run-ovn\") pod \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.139326 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-run\") pod \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.139397 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1986ff74-eed0-4c33-bba8-f3c33a6492c0" (UID: "1986ff74-eed0-4c33-bba8-f3c33a6492c0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.139440 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1986ff74-eed0-4c33-bba8-f3c33a6492c0-additional-scripts\") pod \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.139425 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-run" (OuterVolumeSpecName: "var-run") pod "1986ff74-eed0-4c33-bba8-f3c33a6492c0" (UID: "1986ff74-eed0-4c33-bba8-f3c33a6492c0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.139517 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1986ff74-eed0-4c33-bba8-f3c33a6492c0" (UID: "1986ff74-eed0-4c33-bba8-f3c33a6492c0"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.139535 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1986ff74-eed0-4c33-bba8-f3c33a6492c0-scripts\") pod \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.140639 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1986ff74-eed0-4c33-bba8-f3c33a6492c0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1986ff74-eed0-4c33-bba8-f3c33a6492c0" (UID: "1986ff74-eed0-4c33-bba8-f3c33a6492c0"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.140870 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1986ff74-eed0-4c33-bba8-f3c33a6492c0-scripts" (OuterVolumeSpecName: "scripts") pod "1986ff74-eed0-4c33-bba8-f3c33a6492c0" (UID: "1986ff74-eed0-4c33-bba8-f3c33a6492c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.141291 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq5lg\" (UniqueName: \"kubernetes.io/projected/1986ff74-eed0-4c33-bba8-f3c33a6492c0-kube-api-access-jq5lg\") pod \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\" (UID: \"1986ff74-eed0-4c33-bba8-f3c33a6492c0\") " Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.143720 4772 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.143746 4772 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.143757 4772 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1986ff74-eed0-4c33-bba8-f3c33a6492c0-var-run\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.143766 4772 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1986ff74-eed0-4c33-bba8-f3c33a6492c0-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.143805 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1986ff74-eed0-4c33-bba8-f3c33a6492c0-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.147835 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1986ff74-eed0-4c33-bba8-f3c33a6492c0-kube-api-access-jq5lg" (OuterVolumeSpecName: "kube-api-access-jq5lg") pod "1986ff74-eed0-4c33-bba8-f3c33a6492c0" (UID: "1986ff74-eed0-4c33-bba8-f3c33a6492c0"). InnerVolumeSpecName "kube-api-access-jq5lg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.245284 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq5lg\" (UniqueName: \"kubernetes.io/projected/1986ff74-eed0-4c33-bba8-f3c33a6492c0-kube-api-access-jq5lg\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.660851 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"9b64dd095dedae302ee8872dbb545cb4f71cd6c3458406ffd969c871e5d32f97"} Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.660924 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"e509b334cbca45f7d69afc348750b1eaf2b46b7e2de21fa28deba1cdc6fa5a35"} Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.660942 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"bc0bd34e892fd16130ed4a5eecbe4e9d365371227c42ea2f58cf5d58fd487aea"} Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.660957 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"0dc212e4bb14893df085c4c465fa07a423628ae298a4c048dfcfb28fe09e5c1d"} Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.662853 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qzrrh-config-5r4s2" event={"ID":"1986ff74-eed0-4c33-bba8-f3c33a6492c0","Type":"ContainerDied","Data":"7c656f011f5895d0d322df41dd15c59ee3fbcd0d1530795d07a41a003da99e28"} Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.662893 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c656f011f5895d0d322df41dd15c59ee3fbcd0d1530795d07a41a003da99e28" Nov 28 11:24:11 crc kubenswrapper[4772]: I1128 11:24:11.662940 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qzrrh-config-5r4s2" Nov 28 11:24:12 crc kubenswrapper[4772]: I1128 11:24:12.153199 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qzrrh-config-5r4s2"] Nov 28 11:24:12 crc kubenswrapper[4772]: I1128 11:24:12.164142 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qzrrh-config-5r4s2"] Nov 28 11:24:13 crc kubenswrapper[4772]: I1128 11:24:13.683271 4772 generic.go:334] "Generic (PLEG): container finished" podID="52b2f98f-d36f-4798-9903-1f75498cdb5b" containerID="820b521e8723c89ca4b5ab79f599eaa92d05cdf27d3674e74608ff1f326ac44e" exitCode=0 Nov 28 11:24:13 crc kubenswrapper[4772]: I1128 11:24:13.683397 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52b2f98f-d36f-4798-9903-1f75498cdb5b","Type":"ContainerDied","Data":"820b521e8723c89ca4b5ab79f599eaa92d05cdf27d3674e74608ff1f326ac44e"} Nov 28 11:24:13 crc kubenswrapper[4772]: I1128 11:24:13.686291 4772 generic.go:334] "Generic (PLEG): container finished" podID="4d684e5b-88f5-4004-a176-22ae480daaa6" containerID="ee956eabea4b4338ff07057f9b20ad599a52188d715ed812aea06dde00ec9f7e" exitCode=0 Nov 28 11:24:13 crc kubenswrapper[4772]: I1128 11:24:13.686336 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4d684e5b-88f5-4004-a176-22ae480daaa6","Type":"ContainerDied","Data":"ee956eabea4b4338ff07057f9b20ad599a52188d715ed812aea06dde00ec9f7e"} Nov 28 11:24:14 crc kubenswrapper[4772]: I1128 11:24:14.017122 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1986ff74-eed0-4c33-bba8-f3c33a6492c0" path="/var/lib/kubelet/pods/1986ff74-eed0-4c33-bba8-f3c33a6492c0/volumes" Nov 28 11:24:18 crc kubenswrapper[4772]: I1128 11:24:18.771323 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"f993dc5dce4b7d57fd6d0f426cf38947ce8afd5577e593b9647d1bee2f49a96d"} Nov 28 11:24:18 crc kubenswrapper[4772]: I1128 11:24:18.772446 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"a36c978088712be2b2cd14457d2a3e05652fb3e1ecf2842a812d2c2fec89b549"} Nov 28 11:24:18 crc kubenswrapper[4772]: I1128 11:24:18.772467 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"cddfa83e7781542e4fcde2d36778698950efa7752df6cd5836f1a727477e52eb"} Nov 28 11:24:18 crc kubenswrapper[4772]: I1128 11:24:18.772480 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"0f6cd0b81972716147f70ac32ddf06ea25d24da06b3dd7567916a40e3d77479d"} Nov 28 11:24:18 crc kubenswrapper[4772]: I1128 11:24:18.774571 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4d684e5b-88f5-4004-a176-22ae480daaa6","Type":"ContainerStarted","Data":"e9087979407f97b11ae8526798f63b1fcdb093303aa3e09e4c1a3d05120aeca3"} Nov 28 11:24:18 crc kubenswrapper[4772]: I1128 11:24:18.775707 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 11:24:18 crc kubenswrapper[4772]: I1128 11:24:18.778109 4772 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52b2f98f-d36f-4798-9903-1f75498cdb5b","Type":"ContainerStarted","Data":"c1b5bc0d606b53b1f923962e6320bc50a7f3c41eaa28d4ea3325015f9d8f2470"} Nov 28 11:24:18 crc kubenswrapper[4772]: I1128 11:24:18.778658 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:24:18 crc kubenswrapper[4772]: I1128 11:24:18.814261 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.249652243 podStartE2EDuration="1m28.814243374s" podCreationTimestamp="2025-11-28 11:22:50 +0000 UTC" firstStartedPulling="2025-11-28 11:22:52.419303565 +0000 UTC m=+970.742546792" lastFinishedPulling="2025-11-28 11:23:37.983894696 +0000 UTC m=+1016.307137923" observedRunningTime="2025-11-28 11:24:18.812427735 +0000 UTC m=+1057.135670962" watchObservedRunningTime="2025-11-28 11:24:18.814243374 +0000 UTC m=+1057.137486601" Nov 28 11:24:18 crc kubenswrapper[4772]: I1128 11:24:18.858890 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.736627868 podStartE2EDuration="1m29.858862589s" podCreationTimestamp="2025-11-28 11:22:49 +0000 UTC" firstStartedPulling="2025-11-28 11:22:51.861766238 +0000 UTC m=+970.185009465" lastFinishedPulling="2025-11-28 11:23:37.984000959 +0000 UTC m=+1016.307244186" observedRunningTime="2025-11-28 11:24:18.846457682 +0000 UTC m=+1057.169700959" watchObservedRunningTime="2025-11-28 11:24:18.858862589 +0000 UTC m=+1057.182105836" Nov 28 11:24:20 crc kubenswrapper[4772]: I1128 11:24:20.799641 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"9d39bf0ab06fc01cf1d725db67113a1e2e8777075a2b24eb8e6416bc75995b0e"} Nov 28 11:24:20 crc kubenswrapper[4772]: I1128 11:24:20.800509 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"ea75e65bf2450ad5d4245d1679fa83778397d629176e9cf8eefa0b051eae1631"} Nov 28 11:24:20 crc kubenswrapper[4772]: I1128 11:24:20.800521 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"05e4da65fc388ba086629eac10447c5bd1cef949a24380228f2ae3ecc8417d93"} Nov 28 11:24:21 crc kubenswrapper[4772]: I1128 11:24:21.828837 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"57408878eee7468c2db0563a7bfef39fcc9779d345f57e493412d9cc47cb8293"} Nov 28 11:24:21 crc kubenswrapper[4772]: I1128 11:24:21.829390 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"7015bbf6863b2a8d33650c85e4706c6f8764f62eeaf2215910a4249b3767e4cb"} Nov 28 11:24:21 crc kubenswrapper[4772]: I1128 11:24:21.829431 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"a3fad93208cdfac44467d5f9adce1ea525f69f39f5162437a35348746c1662b9"} Nov 28 11:24:21 crc kubenswrapper[4772]: I1128 11:24:21.829446 4772 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-storage-0" event={"ID":"e9a12326-4e22-49fc-a6d5-b103867d9d0c","Type":"ContainerStarted","Data":"69bfd53b70b4e90a00f6962e6a74a2a6696faacbc77f5f7f4b751a7afd8dbfcc"} Nov 28 11:24:21 crc kubenswrapper[4772]: I1128 11:24:21.883145 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=23.858011302 podStartE2EDuration="34.883115497s" podCreationTimestamp="2025-11-28 11:23:47 +0000 UTC" firstStartedPulling="2025-11-28 11:24:09.05692125 +0000 UTC m=+1047.380164527" lastFinishedPulling="2025-11-28 11:24:20.082025495 +0000 UTC m=+1058.405268722" observedRunningTime="2025-11-28 11:24:21.880778113 +0000 UTC m=+1060.204021340" watchObservedRunningTime="2025-11-28 11:24:21.883115497 +0000 UTC m=+1060.206358724" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.218169 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-h4ngb"] Nov 28 11:24:22 crc kubenswrapper[4772]: E1128 11:24:22.218583 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1986ff74-eed0-4c33-bba8-f3c33a6492c0" containerName="ovn-config" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.218599 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1986ff74-eed0-4c33-bba8-f3c33a6492c0" containerName="ovn-config" Nov 28 11:24:22 crc kubenswrapper[4772]: E1128 11:24:22.218648 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6" containerName="swift-ring-rebalance" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.218656 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6" containerName="swift-ring-rebalance" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.218856 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6" containerName="swift-ring-rebalance" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.218872 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1986ff74-eed0-4c33-bba8-f3c33a6492c0" containerName="ovn-config" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.219760 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.222636 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.281038 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-h4ngb"] Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.360448 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.361276 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-config\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.361537 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.361708 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.361841 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4hmh\" (UniqueName: \"kubernetes.io/projected/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-kube-api-access-d4hmh\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.361977 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.463330 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.463400 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: 
\"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.463437 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4hmh\" (UniqueName: \"kubernetes.io/projected/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-kube-api-access-d4hmh\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.463469 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.463532 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.463555 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-config\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.464842 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-config\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.464931 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.465046 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.465396 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.465729 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:22 
Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.499776 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4hmh\" (UniqueName: \"kubernetes.io/projected/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-kube-api-access-d4hmh\") pod \"dnsmasq-dns-5c79d794d7-h4ngb\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb"
Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.540099 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb"
Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.840313 4772 generic.go:334] "Generic (PLEG): container finished" podID="09675021-323d-41ee-aaaa-bee4e83e2544" containerID="81ca325c59ac763671d22b789a19f04c989f66fd290244f2e41ab1903d6eb651" exitCode=0
Nov 28 11:24:22 crc kubenswrapper[4772]: I1128 11:24:22.840396 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r6s87" event={"ID":"09675021-323d-41ee-aaaa-bee4e83e2544","Type":"ContainerDied","Data":"81ca325c59ac763671d22b789a19f04c989f66fd290244f2e41ab1903d6eb651"}
Nov 28 11:24:23 crc kubenswrapper[4772]: I1128 11:24:23.083229 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-h4ngb"]
Nov 28 11:24:23 crc kubenswrapper[4772]: W1128 11:24:23.086039 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9e7ea1a_856b_49fd_8b3d_7799b253c6ae.slice/crio-ec964f10756688abdd52ea8c4f12e7d9d2fdab537dfd80d5dbf34081e3b30afb WatchSource:0}: Error finding container ec964f10756688abdd52ea8c4f12e7d9d2fdab537dfd80d5dbf34081e3b30afb: Status 404 returned error can't find the container with id ec964f10756688abdd52ea8c4f12e7d9d2fdab537dfd80d5dbf34081e3b30afb
Nov 28 11:24:23 crc kubenswrapper[4772]: I1128 11:24:23.850130 4772 generic.go:334] "Generic (PLEG): container finished" podID="e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" containerID="80b26ae13322756f13c84ce83acbd2139e18e98c8118d0c25e7e720fc70ca215" exitCode=0
Nov 28 11:24:23 crc kubenswrapper[4772]: I1128 11:24:23.850242 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" event={"ID":"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae","Type":"ContainerDied","Data":"80b26ae13322756f13c84ce83acbd2139e18e98c8118d0c25e7e720fc70ca215"}
Nov 28 11:24:23 crc kubenswrapper[4772]: I1128 11:24:23.850545 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" event={"ID":"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae","Type":"ContainerStarted","Data":"ec964f10756688abdd52ea8c4f12e7d9d2fdab537dfd80d5dbf34081e3b30afb"}
Nov 28 11:24:23 crc kubenswrapper[4772]: I1128 11:24:23.898884 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 11:24:23 crc kubenswrapper[4772]: I1128 11:24:23.898957 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 11:24:23 crc kubenswrapper[4772]: I1128 11:24:23.899027 4772 kubelet.go:2542] "SyncLoop
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:24:23 crc kubenswrapper[4772]: I1128 11:24:23.899897 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"146105a3e2de3e49e98dafee8802eaebe7226a811726066f96e02933b7de92a2"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 11:24:23 crc kubenswrapper[4772]: I1128 11:24:23.899955 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://146105a3e2de3e49e98dafee8802eaebe7226a811726066f96e02933b7de92a2" gracePeriod=600 Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.238940 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r6s87" Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.398213 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-db-sync-config-data\") pod \"09675021-323d-41ee-aaaa-bee4e83e2544\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.398291 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-config-data\") pod \"09675021-323d-41ee-aaaa-bee4e83e2544\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.398336 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-combined-ca-bundle\") pod \"09675021-323d-41ee-aaaa-bee4e83e2544\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.398447 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kdt6\" (UniqueName: \"kubernetes.io/projected/09675021-323d-41ee-aaaa-bee4e83e2544-kube-api-access-7kdt6\") pod \"09675021-323d-41ee-aaaa-bee4e83e2544\" (UID: \"09675021-323d-41ee-aaaa-bee4e83e2544\") " Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.405264 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09675021-323d-41ee-aaaa-bee4e83e2544-kube-api-access-7kdt6" (OuterVolumeSpecName: "kube-api-access-7kdt6") pod "09675021-323d-41ee-aaaa-bee4e83e2544" (UID: "09675021-323d-41ee-aaaa-bee4e83e2544"). InnerVolumeSpecName "kube-api-access-7kdt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.405277 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "09675021-323d-41ee-aaaa-bee4e83e2544" (UID: "09675021-323d-41ee-aaaa-bee4e83e2544"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.430976 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09675021-323d-41ee-aaaa-bee4e83e2544" (UID: "09675021-323d-41ee-aaaa-bee4e83e2544"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.443471 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-config-data" (OuterVolumeSpecName: "config-data") pod "09675021-323d-41ee-aaaa-bee4e83e2544" (UID: "09675021-323d-41ee-aaaa-bee4e83e2544"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.500777 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.500823 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kdt6\" (UniqueName: \"kubernetes.io/projected/09675021-323d-41ee-aaaa-bee4e83e2544-kube-api-access-7kdt6\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.500835 4772 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.500844 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09675021-323d-41ee-aaaa-bee4e83e2544-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.862261 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" event={"ID":"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae","Type":"ContainerStarted","Data":"9aa0daa07830e4d4f243979085e0ede02498c80b3b1f63c82ca9b47bbec6de9a"} Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.865335 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-r6s87" Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.865566 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r6s87" event={"ID":"09675021-323d-41ee-aaaa-bee4e83e2544","Type":"ContainerDied","Data":"9575be3c4cf9b38c451d26c9ffced81e0094d6c2d17498b31edf41df229a0f82"} Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.865909 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9575be3c4cf9b38c451d26c9ffced81e0094d6c2d17498b31edf41df229a0f82" Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.869395 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="146105a3e2de3e49e98dafee8802eaebe7226a811726066f96e02933b7de92a2" exitCode=0 Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.869456 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"146105a3e2de3e49e98dafee8802eaebe7226a811726066f96e02933b7de92a2"} Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.869502 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"da141ddbcec3414566d62a4d52bc506fe611de913cb5db4b82768e912f8c16c0"} Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.869527 4772 scope.go:117] "RemoveContainer" containerID="113f0827970f37075ba4d848729cf75c46d547c1a460d92b4daa91c0fd781747" Nov 28 11:24:24 crc kubenswrapper[4772]: I1128 11:24:24.891389 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" podStartSLOduration=2.891331778 podStartE2EDuration="2.891331778s" podCreationTimestamp="2025-11-28 11:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:24:24.88479383 +0000 UTC m=+1063.208037097" watchObservedRunningTime="2025-11-28 11:24:24.891331778 +0000 UTC m=+1063.214575005" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.313890 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-h4ngb"] Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.362746 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-2wtm9"] Nov 28 11:24:25 crc kubenswrapper[4772]: E1128 11:24:25.363152 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09675021-323d-41ee-aaaa-bee4e83e2544" containerName="glance-db-sync" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.363171 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="09675021-323d-41ee-aaaa-bee4e83e2544" containerName="glance-db-sync" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.363336 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="09675021-323d-41ee-aaaa-bee4e83e2544" containerName="glance-db-sync" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.364280 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.397326 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-2wtm9"] Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.421857 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-config\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.421923 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.421955 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qfl8\" (UniqueName: \"kubernetes.io/projected/ee784ed1-af35-4cee-93fc-23d12c813dc6-kube-api-access-7qfl8\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.421983 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.421998 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.422043 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.524401 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.524499 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-config\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.524543 4772 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.524581 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qfl8\" (UniqueName: \"kubernetes.io/projected/ee784ed1-af35-4cee-93fc-23d12c813dc6-kube-api-access-7qfl8\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.524615 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.524634 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.525570 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.525619 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.526256 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-config\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.526583 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.526695 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.550541 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qfl8\" (UniqueName: 
\"kubernetes.io/projected/ee784ed1-af35-4cee-93fc-23d12c813dc6-kube-api-access-7qfl8\") pod \"dnsmasq-dns-5f59b8f679-2wtm9\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.685109 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:25 crc kubenswrapper[4772]: I1128 11:24:25.919855 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:26 crc kubenswrapper[4772]: I1128 11:24:26.249116 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-2wtm9"] Nov 28 11:24:26 crc kubenswrapper[4772]: W1128 11:24:26.258258 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee784ed1_af35_4cee_93fc_23d12c813dc6.slice/crio-012037eb94f0d37110f3bd7cedff722654a14d7f054c5e0f3da11052b9514c27 WatchSource:0}: Error finding container 012037eb94f0d37110f3bd7cedff722654a14d7f054c5e0f3da11052b9514c27: Status 404 returned error can't find the container with id 012037eb94f0d37110f3bd7cedff722654a14d7f054c5e0f3da11052b9514c27 Nov 28 11:24:26 crc kubenswrapper[4772]: I1128 11:24:26.930398 4772 generic.go:334] "Generic (PLEG): container finished" podID="ee784ed1-af35-4cee-93fc-23d12c813dc6" containerID="3faea0f432c35cca5d973248386014d0e4a7d8ba9f3c358c9a0245ab4355fa28" exitCode=0 Nov 28 11:24:26 crc kubenswrapper[4772]: I1128 11:24:26.930481 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" event={"ID":"ee784ed1-af35-4cee-93fc-23d12c813dc6","Type":"ContainerDied","Data":"3faea0f432c35cca5d973248386014d0e4a7d8ba9f3c358c9a0245ab4355fa28"} Nov 28 11:24:26 crc kubenswrapper[4772]: I1128 11:24:26.931070 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" event={"ID":"ee784ed1-af35-4cee-93fc-23d12c813dc6","Type":"ContainerStarted","Data":"012037eb94f0d37110f3bd7cedff722654a14d7f054c5e0f3da11052b9514c27"} Nov 28 11:24:26 crc kubenswrapper[4772]: I1128 11:24:26.931307 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" podUID="e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" containerName="dnsmasq-dns" containerID="cri-o://9aa0daa07830e4d4f243979085e0ede02498c80b3b1f63c82ca9b47bbec6de9a" gracePeriod=10 Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.430741 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.603329 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-config\") pod \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.603413 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-ovsdbserver-sb\") pod \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.603484 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-dns-swift-storage-0\") pod \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.603565 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-dns-svc\") pod \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.603603 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-ovsdbserver-nb\") pod \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.603667 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4hmh\" (UniqueName: \"kubernetes.io/projected/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-kube-api-access-d4hmh\") pod \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\" (UID: \"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae\") " Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.613987 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-kube-api-access-d4hmh" (OuterVolumeSpecName: "kube-api-access-d4hmh") pod "e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" (UID: "e9e7ea1a-856b-49fd-8b3d-7799b253c6ae"). InnerVolumeSpecName "kube-api-access-d4hmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.657757 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" (UID: "e9e7ea1a-856b-49fd-8b3d-7799b253c6ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.667877 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-config" (OuterVolumeSpecName: "config") pod "e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" (UID: "e9e7ea1a-856b-49fd-8b3d-7799b253c6ae"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.672867 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" (UID: "e9e7ea1a-856b-49fd-8b3d-7799b253c6ae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.685812 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" (UID: "e9e7ea1a-856b-49fd-8b3d-7799b253c6ae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.705498 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" (UID: "e9e7ea1a-856b-49fd-8b3d-7799b253c6ae"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.706161 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.706209 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.706223 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.706237 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.706249 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.706262 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4hmh\" (UniqueName: \"kubernetes.io/projected/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae-kube-api-access-d4hmh\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.941337 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" event={"ID":"ee784ed1-af35-4cee-93fc-23d12c813dc6","Type":"ContainerStarted","Data":"dbf958614ee83e249e47ffe11e9a2630f93158f35362cf381c4cbf54cd4f9ca9"} Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.943099 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.944753 
4772 generic.go:334] "Generic (PLEG): container finished" podID="e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" containerID="9aa0daa07830e4d4f243979085e0ede02498c80b3b1f63c82ca9b47bbec6de9a" exitCode=0
Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.944790 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" event={"ID":"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae","Type":"ContainerDied","Data":"9aa0daa07830e4d4f243979085e0ede02498c80b3b1f63c82ca9b47bbec6de9a"}
Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.944808 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb" event={"ID":"e9e7ea1a-856b-49fd-8b3d-7799b253c6ae","Type":"ContainerDied","Data":"ec964f10756688abdd52ea8c4f12e7d9d2fdab537dfd80d5dbf34081e3b30afb"}
Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.944830 4772 scope.go:117] "RemoveContainer" containerID="9aa0daa07830e4d4f243979085e0ede02498c80b3b1f63c82ca9b47bbec6de9a"
Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.944956 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-h4ngb"
Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.966621 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" podStartSLOduration=2.966315477 podStartE2EDuration="2.966315477s" podCreationTimestamp="2025-11-28 11:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:24:27.961442155 +0000 UTC m=+1066.284685392" watchObservedRunningTime="2025-11-28 11:24:27.966315477 +0000 UTC m=+1066.289558704"
Nov 28 11:24:27 crc kubenswrapper[4772]: I1128 11:24:27.981591 4772 scope.go:117] "RemoveContainer" containerID="80b26ae13322756f13c84ce83acbd2139e18e98c8118d0c25e7e720fc70ca215"
Nov 28 11:24:28 crc kubenswrapper[4772]: I1128 11:24:28.008526 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-h4ngb"]
Nov 28 11:24:28 crc kubenswrapper[4772]: I1128 11:24:28.013336 4772 scope.go:117] "RemoveContainer" containerID="9aa0daa07830e4d4f243979085e0ede02498c80b3b1f63c82ca9b47bbec6de9a"
Nov 28 11:24:28 crc kubenswrapper[4772]: E1128 11:24:28.013873 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aa0daa07830e4d4f243979085e0ede02498c80b3b1f63c82ca9b47bbec6de9a\": container with ID starting with 9aa0daa07830e4d4f243979085e0ede02498c80b3b1f63c82ca9b47bbec6de9a not found: ID does not exist" containerID="9aa0daa07830e4d4f243979085e0ede02498c80b3b1f63c82ca9b47bbec6de9a"
Nov 28 11:24:28 crc kubenswrapper[4772]: I1128 11:24:28.013910 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa0daa07830e4d4f243979085e0ede02498c80b3b1f63c82ca9b47bbec6de9a"} err="failed to get container status \"9aa0daa07830e4d4f243979085e0ede02498c80b3b1f63c82ca9b47bbec6de9a\": rpc error: code = NotFound desc = could not find container \"9aa0daa07830e4d4f243979085e0ede02498c80b3b1f63c82ca9b47bbec6de9a\": container with ID starting with 9aa0daa07830e4d4f243979085e0ede02498c80b3b1f63c82ca9b47bbec6de9a not found: ID does not exist"
Nov 28 11:24:28 crc kubenswrapper[4772]: I1128 11:24:28.013943 4772 scope.go:117] "RemoveContainer" containerID="80b26ae13322756f13c84ce83acbd2139e18e98c8118d0c25e7e720fc70ca215"
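The records just above and below show why these "RemoveContainer" calls are safe to repeat: the runtime answers NotFound for a container that is already gone, and kubelet logs the error and carries on (the pod still reaches "SyncLoop REMOVE", and its volumes dir is cleaned up at 11:24:30). The same idempotent-cleanup shape, sketched generically in Python with a stand-in runtime client rather than the real CRI API:

    class NotFound(Exception):
        """Stand-in for a runtime's 'no such container' error."""

    def remove_container(runtime, container_id):
        """Best-effort delete: a container that no longer exists means the
        cleanup already happened, so NotFound counts as success."""
        try:
            runtime.remove(container_id)
        except NotFound:
            # Mirrors the 'DeleteContainer returned error ... not found'
            # records here, which are logged and then treated as benign.
            pass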
Nov 28 11:24:28 crc kubenswrapper[4772]: E1128 11:24:28.014304 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80b26ae13322756f13c84ce83acbd2139e18e98c8118d0c25e7e720fc70ca215\": container with ID starting with 80b26ae13322756f13c84ce83acbd2139e18e98c8118d0c25e7e720fc70ca215 not found: ID does not exist" containerID="80b26ae13322756f13c84ce83acbd2139e18e98c8118d0c25e7e720fc70ca215"
Nov 28 11:24:28 crc kubenswrapper[4772]: I1128 11:24:28.014339 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80b26ae13322756f13c84ce83acbd2139e18e98c8118d0c25e7e720fc70ca215"} err="failed to get container status \"80b26ae13322756f13c84ce83acbd2139e18e98c8118d0c25e7e720fc70ca215\": rpc error: code = NotFound desc = could not find container \"80b26ae13322756f13c84ce83acbd2139e18e98c8118d0c25e7e720fc70ca215\": container with ID starting with 80b26ae13322756f13c84ce83acbd2139e18e98c8118d0c25e7e720fc70ca215 not found: ID does not exist"
Nov 28 11:24:28 crc kubenswrapper[4772]: I1128 11:24:28.016482 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-h4ngb"]
Nov 28 11:24:30 crc kubenswrapper[4772]: I1128 11:24:30.011505 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" path="/var/lib/kubelet/pods/e9e7ea1a-856b-49fd-8b3d-7799b253c6ae/volumes"
Nov 28 11:24:31 crc kubenswrapper[4772]: I1128 11:24:31.235733 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 28 11:24:31 crc kubenswrapper[4772]: I1128 11:24:31.761589 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.815922 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-r2v2z"]
Nov 28 11:24:33 crc kubenswrapper[4772]: E1128 11:24:33.816944 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" containerName="init"
Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.816971 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" containerName="init"
Nov 28 11:24:33 crc kubenswrapper[4772]: E1128 11:24:33.817008 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" containerName="dnsmasq-dns"
Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.817017 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" containerName="dnsmasq-dns"
Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.817270 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e7ea1a-856b-49fd-8b3d-7799b253c6ae" containerName="dnsmasq-dns"
Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.818099 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-r2v2z"
Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.829507 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-001d-account-create-update-4jcc5"]
Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.831294 4772 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-001d-account-create-update-4jcc5" Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.834443 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.848715 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-r2v2z"] Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.859232 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-001d-account-create-update-4jcc5"] Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.933298 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b99c928-e317-4266-a09f-5e3a2d0eb4b5-operator-scripts\") pod \"cinder-001d-account-create-update-4jcc5\" (UID: \"4b99c928-e317-4266-a09f-5e3a2d0eb4b5\") " pod="openstack/cinder-001d-account-create-update-4jcc5" Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.933387 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9flc\" (UniqueName: \"kubernetes.io/projected/4b99c928-e317-4266-a09f-5e3a2d0eb4b5-kube-api-access-n9flc\") pod \"cinder-001d-account-create-update-4jcc5\" (UID: \"4b99c928-e317-4266-a09f-5e3a2d0eb4b5\") " pod="openstack/cinder-001d-account-create-update-4jcc5" Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.933431 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtlgk\" (UniqueName: \"kubernetes.io/projected/1318204d-e75b-4ba8-b804-298eded6f129-kube-api-access-gtlgk\") pod \"cinder-db-create-r2v2z\" (UID: \"1318204d-e75b-4ba8-b804-298eded6f129\") " pod="openstack/cinder-db-create-r2v2z" Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.933553 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1318204d-e75b-4ba8-b804-298eded6f129-operator-scripts\") pod \"cinder-db-create-r2v2z\" (UID: \"1318204d-e75b-4ba8-b804-298eded6f129\") " pod="openstack/cinder-db-create-r2v2z" Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.944056 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-2plvb"] Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.945426 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2plvb" Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.952346 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4e7b-account-create-update-nd78w"] Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.953731 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4e7b-account-create-update-nd78w" Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.956372 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 28 11:24:33 crc kubenswrapper[4772]: I1128 11:24:33.961392 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2plvb"] Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.006610 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4e7b-account-create-update-nd78w"] Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.035211 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b99c928-e317-4266-a09f-5e3a2d0eb4b5-operator-scripts\") pod \"cinder-001d-account-create-update-4jcc5\" (UID: \"4b99c928-e317-4266-a09f-5e3a2d0eb4b5\") " pod="openstack/cinder-001d-account-create-update-4jcc5" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.035286 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a4b747-c5a6-41c0-86cf-895a5fff2470-operator-scripts\") pod \"barbican-db-create-2plvb\" (UID: \"75a4b747-c5a6-41c0-86cf-895a5fff2470\") " pod="openstack/barbican-db-create-2plvb" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.035340 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9flc\" (UniqueName: \"kubernetes.io/projected/4b99c928-e317-4266-a09f-5e3a2d0eb4b5-kube-api-access-n9flc\") pod \"cinder-001d-account-create-update-4jcc5\" (UID: \"4b99c928-e317-4266-a09f-5e3a2d0eb4b5\") " pod="openstack/cinder-001d-account-create-update-4jcc5" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.035404 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtlgk\" (UniqueName: \"kubernetes.io/projected/1318204d-e75b-4ba8-b804-298eded6f129-kube-api-access-gtlgk\") pod \"cinder-db-create-r2v2z\" (UID: \"1318204d-e75b-4ba8-b804-298eded6f129\") " pod="openstack/cinder-db-create-r2v2z" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.035433 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2tmw\" (UniqueName: \"kubernetes.io/projected/eaf64ee3-3552-488c-9dff-15dc31783f22-kube-api-access-k2tmw\") pod \"barbican-4e7b-account-create-update-nd78w\" (UID: \"eaf64ee3-3552-488c-9dff-15dc31783f22\") " pod="openstack/barbican-4e7b-account-create-update-nd78w" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.035479 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaf64ee3-3552-488c-9dff-15dc31783f22-operator-scripts\") pod \"barbican-4e7b-account-create-update-nd78w\" (UID: \"eaf64ee3-3552-488c-9dff-15dc31783f22\") " pod="openstack/barbican-4e7b-account-create-update-nd78w" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.035496 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdsbv\" (UniqueName: \"kubernetes.io/projected/75a4b747-c5a6-41c0-86cf-895a5fff2470-kube-api-access-cdsbv\") pod \"barbican-db-create-2plvb\" (UID: \"75a4b747-c5a6-41c0-86cf-895a5fff2470\") " pod="openstack/barbican-db-create-2plvb" Nov 28 11:24:34 crc 
kubenswrapper[4772]: I1128 11:24:34.035561 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1318204d-e75b-4ba8-b804-298eded6f129-operator-scripts\") pod \"cinder-db-create-r2v2z\" (UID: \"1318204d-e75b-4ba8-b804-298eded6f129\") " pod="openstack/cinder-db-create-r2v2z" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.036158 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b99c928-e317-4266-a09f-5e3a2d0eb4b5-operator-scripts\") pod \"cinder-001d-account-create-update-4jcc5\" (UID: \"4b99c928-e317-4266-a09f-5e3a2d0eb4b5\") " pod="openstack/cinder-001d-account-create-update-4jcc5" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.036472 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1318204d-e75b-4ba8-b804-298eded6f129-operator-scripts\") pod \"cinder-db-create-r2v2z\" (UID: \"1318204d-e75b-4ba8-b804-298eded6f129\") " pod="openstack/cinder-db-create-r2v2z" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.060350 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtlgk\" (UniqueName: \"kubernetes.io/projected/1318204d-e75b-4ba8-b804-298eded6f129-kube-api-access-gtlgk\") pod \"cinder-db-create-r2v2z\" (UID: \"1318204d-e75b-4ba8-b804-298eded6f129\") " pod="openstack/cinder-db-create-r2v2z" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.060832 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9flc\" (UniqueName: \"kubernetes.io/projected/4b99c928-e317-4266-a09f-5e3a2d0eb4b5-kube-api-access-n9flc\") pod \"cinder-001d-account-create-update-4jcc5\" (UID: \"4b99c928-e317-4266-a09f-5e3a2d0eb4b5\") " pod="openstack/cinder-001d-account-create-update-4jcc5" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.098920 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-86jrn"] Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.100080 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-86jrn" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.102165 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.102464 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4j4h4" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.103644 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.103911 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.120944 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-86jrn"] Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.137315 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806ae6be-af8d-492c-8556-797040276b12-config-data\") pod \"keystone-db-sync-86jrn\" (UID: \"806ae6be-af8d-492c-8556-797040276b12\") " pod="openstack/keystone-db-sync-86jrn" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.137414 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2tmw\" (UniqueName: \"kubernetes.io/projected/eaf64ee3-3552-488c-9dff-15dc31783f22-kube-api-access-k2tmw\") pod \"barbican-4e7b-account-create-update-nd78w\" (UID: \"eaf64ee3-3552-488c-9dff-15dc31783f22\") " pod="openstack/barbican-4e7b-account-create-update-nd78w" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.137447 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806ae6be-af8d-492c-8556-797040276b12-combined-ca-bundle\") pod \"keystone-db-sync-86jrn\" (UID: \"806ae6be-af8d-492c-8556-797040276b12\") " pod="openstack/keystone-db-sync-86jrn" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.137467 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaf64ee3-3552-488c-9dff-15dc31783f22-operator-scripts\") pod \"barbican-4e7b-account-create-update-nd78w\" (UID: \"eaf64ee3-3552-488c-9dff-15dc31783f22\") " pod="openstack/barbican-4e7b-account-create-update-nd78w" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.137487 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdsbv\" (UniqueName: \"kubernetes.io/projected/75a4b747-c5a6-41c0-86cf-895a5fff2470-kube-api-access-cdsbv\") pod \"barbican-db-create-2plvb\" (UID: \"75a4b747-c5a6-41c0-86cf-895a5fff2470\") " pod="openstack/barbican-db-create-2plvb" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.137524 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqv7h\" (UniqueName: \"kubernetes.io/projected/806ae6be-af8d-492c-8556-797040276b12-kube-api-access-qqv7h\") pod \"keystone-db-sync-86jrn\" (UID: \"806ae6be-af8d-492c-8556-797040276b12\") " pod="openstack/keystone-db-sync-86jrn" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.137583 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/75a4b747-c5a6-41c0-86cf-895a5fff2470-operator-scripts\") pod \"barbican-db-create-2plvb\" (UID: \"75a4b747-c5a6-41c0-86cf-895a5fff2470\") " pod="openstack/barbican-db-create-2plvb" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.138238 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a4b747-c5a6-41c0-86cf-895a5fff2470-operator-scripts\") pod \"barbican-db-create-2plvb\" (UID: \"75a4b747-c5a6-41c0-86cf-895a5fff2470\") " pod="openstack/barbican-db-create-2plvb" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.139183 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaf64ee3-3552-488c-9dff-15dc31783f22-operator-scripts\") pod \"barbican-4e7b-account-create-update-nd78w\" (UID: \"eaf64ee3-3552-488c-9dff-15dc31783f22\") " pod="openstack/barbican-4e7b-account-create-update-nd78w" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.140605 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-r2v2z" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.154013 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-001d-account-create-update-4jcc5" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.157633 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdsbv\" (UniqueName: \"kubernetes.io/projected/75a4b747-c5a6-41c0-86cf-895a5fff2470-kube-api-access-cdsbv\") pod \"barbican-db-create-2plvb\" (UID: \"75a4b747-c5a6-41c0-86cf-895a5fff2470\") " pod="openstack/barbican-db-create-2plvb" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.157637 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2tmw\" (UniqueName: \"kubernetes.io/projected/eaf64ee3-3552-488c-9dff-15dc31783f22-kube-api-access-k2tmw\") pod \"barbican-4e7b-account-create-update-nd78w\" (UID: \"eaf64ee3-3552-488c-9dff-15dc31783f22\") " pod="openstack/barbican-4e7b-account-create-update-nd78w" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.228947 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-qnrnt"] Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.230341 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qnrnt" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.239623 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806ae6be-af8d-492c-8556-797040276b12-config-data\") pod \"keystone-db-sync-86jrn\" (UID: \"806ae6be-af8d-492c-8556-797040276b12\") " pod="openstack/keystone-db-sync-86jrn" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.239710 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806ae6be-af8d-492c-8556-797040276b12-combined-ca-bundle\") pod \"keystone-db-sync-86jrn\" (UID: \"806ae6be-af8d-492c-8556-797040276b12\") " pod="openstack/keystone-db-sync-86jrn" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.239759 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqv7h\" (UniqueName: \"kubernetes.io/projected/806ae6be-af8d-492c-8556-797040276b12-kube-api-access-qqv7h\") pod \"keystone-db-sync-86jrn\" (UID: \"806ae6be-af8d-492c-8556-797040276b12\") " pod="openstack/keystone-db-sync-86jrn" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.244464 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806ae6be-af8d-492c-8556-797040276b12-config-data\") pod \"keystone-db-sync-86jrn\" (UID: \"806ae6be-af8d-492c-8556-797040276b12\") " pod="openstack/keystone-db-sync-86jrn" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.245078 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806ae6be-af8d-492c-8556-797040276b12-combined-ca-bundle\") pod \"keystone-db-sync-86jrn\" (UID: \"806ae6be-af8d-492c-8556-797040276b12\") " pod="openstack/keystone-db-sync-86jrn" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.245568 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0617-account-create-update-vwt74"] Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.251916 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0617-account-create-update-vwt74" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.254166 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.260752 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqv7h\" (UniqueName: \"kubernetes.io/projected/806ae6be-af8d-492c-8556-797040276b12-kube-api-access-qqv7h\") pod \"keystone-db-sync-86jrn\" (UID: \"806ae6be-af8d-492c-8556-797040276b12\") " pod="openstack/keystone-db-sync-86jrn" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.265072 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0617-account-create-update-vwt74"] Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.268384 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2plvb" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.280591 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qnrnt"] Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.290757 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4e7b-account-create-update-nd78w" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.349265 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttq2c\" (UniqueName: \"kubernetes.io/projected/fa0c2253-60c9-48a0-ab78-8a63f736b36f-kube-api-access-ttq2c\") pod \"neutron-db-create-qnrnt\" (UID: \"fa0c2253-60c9-48a0-ab78-8a63f736b36f\") " pod="openstack/neutron-db-create-qnrnt" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.349458 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa0c2253-60c9-48a0-ab78-8a63f736b36f-operator-scripts\") pod \"neutron-db-create-qnrnt\" (UID: \"fa0c2253-60c9-48a0-ab78-8a63f736b36f\") " pod="openstack/neutron-db-create-qnrnt" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.349549 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl5jb\" (UniqueName: \"kubernetes.io/projected/3b4efdbb-8b77-4889-be40-c74e7eac7392-kube-api-access-fl5jb\") pod \"neutron-0617-account-create-update-vwt74\" (UID: \"3b4efdbb-8b77-4889-be40-c74e7eac7392\") " pod="openstack/neutron-0617-account-create-update-vwt74" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.349767 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4efdbb-8b77-4889-be40-c74e7eac7392-operator-scripts\") pod \"neutron-0617-account-create-update-vwt74\" (UID: \"3b4efdbb-8b77-4889-be40-c74e7eac7392\") " pod="openstack/neutron-0617-account-create-update-vwt74" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.438780 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-86jrn" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.452982 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4efdbb-8b77-4889-be40-c74e7eac7392-operator-scripts\") pod \"neutron-0617-account-create-update-vwt74\" (UID: \"3b4efdbb-8b77-4889-be40-c74e7eac7392\") " pod="openstack/neutron-0617-account-create-update-vwt74" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.453071 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttq2c\" (UniqueName: \"kubernetes.io/projected/fa0c2253-60c9-48a0-ab78-8a63f736b36f-kube-api-access-ttq2c\") pod \"neutron-db-create-qnrnt\" (UID: \"fa0c2253-60c9-48a0-ab78-8a63f736b36f\") " pod="openstack/neutron-db-create-qnrnt" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.453093 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa0c2253-60c9-48a0-ab78-8a63f736b36f-operator-scripts\") pod \"neutron-db-create-qnrnt\" (UID: \"fa0c2253-60c9-48a0-ab78-8a63f736b36f\") " pod="openstack/neutron-db-create-qnrnt" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.453169 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl5jb\" (UniqueName: \"kubernetes.io/projected/3b4efdbb-8b77-4889-be40-c74e7eac7392-kube-api-access-fl5jb\") pod \"neutron-0617-account-create-update-vwt74\" (UID: \"3b4efdbb-8b77-4889-be40-c74e7eac7392\") " pod="openstack/neutron-0617-account-create-update-vwt74" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.453947 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa0c2253-60c9-48a0-ab78-8a63f736b36f-operator-scripts\") pod \"neutron-db-create-qnrnt\" (UID: \"fa0c2253-60c9-48a0-ab78-8a63f736b36f\") " pod="openstack/neutron-db-create-qnrnt" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.453979 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4efdbb-8b77-4889-be40-c74e7eac7392-operator-scripts\") pod \"neutron-0617-account-create-update-vwt74\" (UID: \"3b4efdbb-8b77-4889-be40-c74e7eac7392\") " pod="openstack/neutron-0617-account-create-update-vwt74" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.488430 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttq2c\" (UniqueName: \"kubernetes.io/projected/fa0c2253-60c9-48a0-ab78-8a63f736b36f-kube-api-access-ttq2c\") pod \"neutron-db-create-qnrnt\" (UID: \"fa0c2253-60c9-48a0-ab78-8a63f736b36f\") " pod="openstack/neutron-db-create-qnrnt" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.491997 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl5jb\" (UniqueName: \"kubernetes.io/projected/3b4efdbb-8b77-4889-be40-c74e7eac7392-kube-api-access-fl5jb\") pod \"neutron-0617-account-create-update-vwt74\" (UID: \"3b4efdbb-8b77-4889-be40-c74e7eac7392\") " pod="openstack/neutron-0617-account-create-update-vwt74" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.583317 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-001d-account-create-update-4jcc5"] Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.594539 4772 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/neutron-db-create-qnrnt" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.611135 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0617-account-create-update-vwt74" Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.661290 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-r2v2z"] Nov 28 11:24:34 crc kubenswrapper[4772]: W1128 11:24:34.676003 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1318204d_e75b_4ba8_b804_298eded6f129.slice/crio-968b819876ade473bb3822836534b9dfd0966a2fa6165819a5ebb18627c5a7d1 WatchSource:0}: Error finding container 968b819876ade473bb3822836534b9dfd0966a2fa6165819a5ebb18627c5a7d1: Status 404 returned error can't find the container with id 968b819876ade473bb3822836534b9dfd0966a2fa6165819a5ebb18627c5a7d1 Nov 28 11:24:34 crc kubenswrapper[4772]: I1128 11:24:34.968106 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4e7b-account-create-update-nd78w"] Nov 28 11:24:35 crc kubenswrapper[4772]: I1128 11:24:35.000690 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2plvb"] Nov 28 11:24:35 crc kubenswrapper[4772]: I1128 11:24:35.046442 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2plvb" event={"ID":"75a4b747-c5a6-41c0-86cf-895a5fff2470","Type":"ContainerStarted","Data":"67f811002dcf9a8eec90146bbaaf16da41c9843790f7d4711687110246cc1651"} Nov 28 11:24:35 crc kubenswrapper[4772]: I1128 11:24:35.056039 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-r2v2z" event={"ID":"1318204d-e75b-4ba8-b804-298eded6f129","Type":"ContainerStarted","Data":"ffe0b40929021d0e4c16cb9e25cf459c2b460e76b1b15f02f8b43f6e8ab6d00d"} Nov 28 11:24:35 crc kubenswrapper[4772]: I1128 11:24:35.056093 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-r2v2z" event={"ID":"1318204d-e75b-4ba8-b804-298eded6f129","Type":"ContainerStarted","Data":"968b819876ade473bb3822836534b9dfd0966a2fa6165819a5ebb18627c5a7d1"} Nov 28 11:24:35 crc kubenswrapper[4772]: I1128 11:24:35.067008 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-001d-account-create-update-4jcc5" event={"ID":"4b99c928-e317-4266-a09f-5e3a2d0eb4b5","Type":"ContainerStarted","Data":"0b3876f39bdd78130314285e169be6cb969844d2429e4ee9989ad80b993a8893"} Nov 28 11:24:35 crc kubenswrapper[4772]: I1128 11:24:35.067071 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-001d-account-create-update-4jcc5" event={"ID":"4b99c928-e317-4266-a09f-5e3a2d0eb4b5","Type":"ContainerStarted","Data":"74910e67e3be794fa1deecd8d9e9ddf28e4b6ab3ec312e23dd2098d29b7041d7"} Nov 28 11:24:35 crc kubenswrapper[4772]: I1128 11:24:35.067831 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-86jrn"] Nov 28 11:24:35 crc kubenswrapper[4772]: W1128 11:24:35.072160 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod806ae6be_af8d_492c_8556_797040276b12.slice/crio-6cea3f434f711fbfb411199b54479c76841fe6b35f057494cb9349d3e5138e12 WatchSource:0}: Error finding container 6cea3f434f711fbfb411199b54479c76841fe6b35f057494cb9349d3e5138e12: Status 404 returned error can't find the container with id 
6cea3f434f711fbfb411199b54479c76841fe6b35f057494cb9349d3e5138e12 Nov 28 11:24:35 crc kubenswrapper[4772]: I1128 11:24:35.096138 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-r2v2z" podStartSLOduration=2.096097295 podStartE2EDuration="2.096097295s" podCreationTimestamp="2025-11-28 11:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:24:35.078443665 +0000 UTC m=+1073.401686892" watchObservedRunningTime="2025-11-28 11:24:35.096097295 +0000 UTC m=+1073.419340522" Nov 28 11:24:35 crc kubenswrapper[4772]: I1128 11:24:35.129957 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-001d-account-create-update-4jcc5" podStartSLOduration=2.129899556 podStartE2EDuration="2.129899556s" podCreationTimestamp="2025-11-28 11:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:24:35.107831915 +0000 UTC m=+1073.431075132" watchObservedRunningTime="2025-11-28 11:24:35.129899556 +0000 UTC m=+1073.453142793" Nov 28 11:24:35 crc kubenswrapper[4772]: I1128 11:24:35.139141 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0617-account-create-update-vwt74"] Nov 28 11:24:35 crc kubenswrapper[4772]: I1128 11:24:35.177798 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qnrnt"] Nov 28 11:24:35 crc kubenswrapper[4772]: W1128 11:24:35.183934 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa0c2253_60c9_48a0_ab78_8a63f736b36f.slice/crio-0ecabb54b2ab490ff510b555dfce1f7501843ea272191c87ec0877517630b0a2 WatchSource:0}: Error finding container 0ecabb54b2ab490ff510b555dfce1f7501843ea272191c87ec0877517630b0a2: Status 404 returned error can't find the container with id 0ecabb54b2ab490ff510b555dfce1f7501843ea272191c87ec0877517630b0a2 Nov 28 11:24:35 crc kubenswrapper[4772]: I1128 11:24:35.690413 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:24:35 crc kubenswrapper[4772]: I1128 11:24:35.753577 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9vs4q"] Nov 28 11:24:35 crc kubenswrapper[4772]: I1128 11:24:35.753889 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" podUID="dae7cb7c-0b89-4e5c-b8d5-50631df88ac4" containerName="dnsmasq-dns" containerID="cri-o://0521a88c714dc05b0b9abe12b7e8373b5579b11282455add165c1e84e55cbf5f" gracePeriod=10 Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.115054 4772 generic.go:334] "Generic (PLEG): container finished" podID="fa0c2253-60c9-48a0-ab78-8a63f736b36f" containerID="dbd0913af328e934d9264548f5550d72f9170bcb4051297c6714b11e801592d9" exitCode=0 Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.115176 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qnrnt" event={"ID":"fa0c2253-60c9-48a0-ab78-8a63f736b36f","Type":"ContainerDied","Data":"dbd0913af328e934d9264548f5550d72f9170bcb4051297c6714b11e801592d9"} Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.115257 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qnrnt" 
event={"ID":"fa0c2253-60c9-48a0-ab78-8a63f736b36f","Type":"ContainerStarted","Data":"0ecabb54b2ab490ff510b555dfce1f7501843ea272191c87ec0877517630b0a2"} Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.117977 4772 generic.go:334] "Generic (PLEG): container finished" podID="75a4b747-c5a6-41c0-86cf-895a5fff2470" containerID="4b4fb445aa7d16168906f40cde0c3a491f8431f39d5008ab6b92fb94242f99fa" exitCode=0 Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.118051 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2plvb" event={"ID":"75a4b747-c5a6-41c0-86cf-895a5fff2470","Type":"ContainerDied","Data":"4b4fb445aa7d16168906f40cde0c3a491f8431f39d5008ab6b92fb94242f99fa"} Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.122430 4772 generic.go:334] "Generic (PLEG): container finished" podID="3b4efdbb-8b77-4889-be40-c74e7eac7392" containerID="6146a10f8f44980196d83eeaac6f5422ff80a5ac6e655bd081c32e8d0722c600" exitCode=0 Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.122492 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0617-account-create-update-vwt74" event={"ID":"3b4efdbb-8b77-4889-be40-c74e7eac7392","Type":"ContainerDied","Data":"6146a10f8f44980196d83eeaac6f5422ff80a5ac6e655bd081c32e8d0722c600"} Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.122510 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0617-account-create-update-vwt74" event={"ID":"3b4efdbb-8b77-4889-be40-c74e7eac7392","Type":"ContainerStarted","Data":"4a61277c9dbda8a321a45224efa44efab371ea5ebf06ab2be428ea4f1655e17a"} Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.125340 4772 generic.go:334] "Generic (PLEG): container finished" podID="eaf64ee3-3552-488c-9dff-15dc31783f22" containerID="fd949075fc1212b789138e63ce1e1be6b059582793649a66619fdeb402308e3b" exitCode=0 Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.125438 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e7b-account-create-update-nd78w" event={"ID":"eaf64ee3-3552-488c-9dff-15dc31783f22","Type":"ContainerDied","Data":"fd949075fc1212b789138e63ce1e1be6b059582793649a66619fdeb402308e3b"} Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.125473 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e7b-account-create-update-nd78w" event={"ID":"eaf64ee3-3552-488c-9dff-15dc31783f22","Type":"ContainerStarted","Data":"6613e45c85a1d01f2573d226a7e72d0e140c4ec91d8c46723d7cf33be4d9782a"} Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.128065 4772 generic.go:334] "Generic (PLEG): container finished" podID="dae7cb7c-0b89-4e5c-b8d5-50631df88ac4" containerID="0521a88c714dc05b0b9abe12b7e8373b5579b11282455add165c1e84e55cbf5f" exitCode=0 Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.128112 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" event={"ID":"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4","Type":"ContainerDied","Data":"0521a88c714dc05b0b9abe12b7e8373b5579b11282455add165c1e84e55cbf5f"} Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.130764 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-86jrn" event={"ID":"806ae6be-af8d-492c-8556-797040276b12","Type":"ContainerStarted","Data":"6cea3f434f711fbfb411199b54479c76841fe6b35f057494cb9349d3e5138e12"} Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.143187 4772 generic.go:334] "Generic (PLEG): container finished" 
podID="1318204d-e75b-4ba8-b804-298eded6f129" containerID="ffe0b40929021d0e4c16cb9e25cf459c2b460e76b1b15f02f8b43f6e8ab6d00d" exitCode=0 Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.143596 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-r2v2z" event={"ID":"1318204d-e75b-4ba8-b804-298eded6f129","Type":"ContainerDied","Data":"ffe0b40929021d0e4c16cb9e25cf459c2b460e76b1b15f02f8b43f6e8ab6d00d"} Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.145734 4772 generic.go:334] "Generic (PLEG): container finished" podID="4b99c928-e317-4266-a09f-5e3a2d0eb4b5" containerID="0b3876f39bdd78130314285e169be6cb969844d2429e4ee9989ad80b993a8893" exitCode=0 Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.145765 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-001d-account-create-update-4jcc5" event={"ID":"4b99c928-e317-4266-a09f-5e3a2d0eb4b5","Type":"ContainerDied","Data":"0b3876f39bdd78130314285e169be6cb969844d2429e4ee9989ad80b993a8893"} Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.265057 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.293563 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-dns-svc\") pod \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.293725 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-config\") pod \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.293861 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-ovsdbserver-sb\") pod \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.293946 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-ovsdbserver-nb\") pod \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.294174 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsljk\" (UniqueName: \"kubernetes.io/projected/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-kube-api-access-gsljk\") pod \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\" (UID: \"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4\") " Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.301984 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-kube-api-access-gsljk" (OuterVolumeSpecName: "kube-api-access-gsljk") pod "dae7cb7c-0b89-4e5c-b8d5-50631df88ac4" (UID: "dae7cb7c-0b89-4e5c-b8d5-50631df88ac4"). InnerVolumeSpecName "kube-api-access-gsljk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.345263 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-config" (OuterVolumeSpecName: "config") pod "dae7cb7c-0b89-4e5c-b8d5-50631df88ac4" (UID: "dae7cb7c-0b89-4e5c-b8d5-50631df88ac4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.346733 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dae7cb7c-0b89-4e5c-b8d5-50631df88ac4" (UID: "dae7cb7c-0b89-4e5c-b8d5-50631df88ac4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.354686 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dae7cb7c-0b89-4e5c-b8d5-50631df88ac4" (UID: "dae7cb7c-0b89-4e5c-b8d5-50631df88ac4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.374026 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dae7cb7c-0b89-4e5c-b8d5-50631df88ac4" (UID: "dae7cb7c-0b89-4e5c-b8d5-50631df88ac4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.397183 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsljk\" (UniqueName: \"kubernetes.io/projected/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-kube-api-access-gsljk\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.397223 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.397232 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.397241 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:36 crc kubenswrapper[4772]: I1128 11:24:36.397249 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:37 crc kubenswrapper[4772]: I1128 11:24:37.158489 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" event={"ID":"dae7cb7c-0b89-4e5c-b8d5-50631df88ac4","Type":"ContainerDied","Data":"8113b6cbff9e53cb513947bb3e11a44977c9703b842270ee5f75ca5a7b48028d"} Nov 28 11:24:37 crc kubenswrapper[4772]: I1128 11:24:37.158560 4772 scope.go:117] "RemoveContainer" 
containerID="0521a88c714dc05b0b9abe12b7e8373b5579b11282455add165c1e84e55cbf5f" Nov 28 11:24:37 crc kubenswrapper[4772]: I1128 11:24:37.158658 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-9vs4q" Nov 28 11:24:37 crc kubenswrapper[4772]: I1128 11:24:37.202488 4772 scope.go:117] "RemoveContainer" containerID="0961d02c2007cad4d5784da8e186ee4cd9319fa79da6e19f2b93737311f54878" Nov 28 11:24:37 crc kubenswrapper[4772]: I1128 11:24:37.220115 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9vs4q"] Nov 28 11:24:37 crc kubenswrapper[4772]: I1128 11:24:37.227625 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9vs4q"] Nov 28 11:24:37 crc kubenswrapper[4772]: I1128 11:24:37.532794 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-001d-account-create-update-4jcc5" Nov 28 11:24:37 crc kubenswrapper[4772]: I1128 11:24:37.646059 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9flc\" (UniqueName: \"kubernetes.io/projected/4b99c928-e317-4266-a09f-5e3a2d0eb4b5-kube-api-access-n9flc\") pod \"4b99c928-e317-4266-a09f-5e3a2d0eb4b5\" (UID: \"4b99c928-e317-4266-a09f-5e3a2d0eb4b5\") " Nov 28 11:24:37 crc kubenswrapper[4772]: I1128 11:24:37.646313 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b99c928-e317-4266-a09f-5e3a2d0eb4b5-operator-scripts\") pod \"4b99c928-e317-4266-a09f-5e3a2d0eb4b5\" (UID: \"4b99c928-e317-4266-a09f-5e3a2d0eb4b5\") " Nov 28 11:24:37 crc kubenswrapper[4772]: I1128 11:24:37.648800 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b99c928-e317-4266-a09f-5e3a2d0eb4b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b99c928-e317-4266-a09f-5e3a2d0eb4b5" (UID: "4b99c928-e317-4266-a09f-5e3a2d0eb4b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:37 crc kubenswrapper[4772]: I1128 11:24:37.653573 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b99c928-e317-4266-a09f-5e3a2d0eb4b5-kube-api-access-n9flc" (OuterVolumeSpecName: "kube-api-access-n9flc") pod "4b99c928-e317-4266-a09f-5e3a2d0eb4b5" (UID: "4b99c928-e317-4266-a09f-5e3a2d0eb4b5"). InnerVolumeSpecName "kube-api-access-n9flc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:24:37 crc kubenswrapper[4772]: I1128 11:24:37.748445 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9flc\" (UniqueName: \"kubernetes.io/projected/4b99c928-e317-4266-a09f-5e3a2d0eb4b5-kube-api-access-n9flc\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:37 crc kubenswrapper[4772]: I1128 11:24:37.748491 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b99c928-e317-4266-a09f-5e3a2d0eb4b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:38 crc kubenswrapper[4772]: I1128 11:24:38.013022 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae7cb7c-0b89-4e5c-b8d5-50631df88ac4" path="/var/lib/kubelet/pods/dae7cb7c-0b89-4e5c-b8d5-50631df88ac4/volumes" Nov 28 11:24:38 crc kubenswrapper[4772]: I1128 11:24:38.172795 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-001d-account-create-update-4jcc5" event={"ID":"4b99c928-e317-4266-a09f-5e3a2d0eb4b5","Type":"ContainerDied","Data":"74910e67e3be794fa1deecd8d9e9ddf28e4b6ab3ec312e23dd2098d29b7041d7"} Nov 28 11:24:38 crc kubenswrapper[4772]: I1128 11:24:38.172845 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74910e67e3be794fa1deecd8d9e9ddf28e4b6ab3ec312e23dd2098d29b7041d7" Nov 28 11:24:38 crc kubenswrapper[4772]: I1128 11:24:38.172899 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-001d-account-create-update-4jcc5" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.370963 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qnrnt" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.401211 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa0c2253-60c9-48a0-ab78-8a63f736b36f-operator-scripts\") pod \"fa0c2253-60c9-48a0-ab78-8a63f736b36f\" (UID: \"fa0c2253-60c9-48a0-ab78-8a63f736b36f\") " Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.401340 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttq2c\" (UniqueName: \"kubernetes.io/projected/fa0c2253-60c9-48a0-ab78-8a63f736b36f-kube-api-access-ttq2c\") pod \"fa0c2253-60c9-48a0-ab78-8a63f736b36f\" (UID: \"fa0c2253-60c9-48a0-ab78-8a63f736b36f\") " Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.402194 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa0c2253-60c9-48a0-ab78-8a63f736b36f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa0c2253-60c9-48a0-ab78-8a63f736b36f" (UID: "fa0c2253-60c9-48a0-ab78-8a63f736b36f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.408192 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0c2253-60c9-48a0-ab78-8a63f736b36f-kube-api-access-ttq2c" (OuterVolumeSpecName: "kube-api-access-ttq2c") pod "fa0c2253-60c9-48a0-ab78-8a63f736b36f" (UID: "fa0c2253-60c9-48a0-ab78-8a63f736b36f"). InnerVolumeSpecName "kube-api-access-ttq2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.419552 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-2plvb" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.479470 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-r2v2z" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.484439 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e7b-account-create-update-nd78w" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.503233 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1318204d-e75b-4ba8-b804-298eded6f129-operator-scripts\") pod \"1318204d-e75b-4ba8-b804-298eded6f129\" (UID: \"1318204d-e75b-4ba8-b804-298eded6f129\") " Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.503282 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaf64ee3-3552-488c-9dff-15dc31783f22-operator-scripts\") pod \"eaf64ee3-3552-488c-9dff-15dc31783f22\" (UID: \"eaf64ee3-3552-488c-9dff-15dc31783f22\") " Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.503494 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a4b747-c5a6-41c0-86cf-895a5fff2470-operator-scripts\") pod \"75a4b747-c5a6-41c0-86cf-895a5fff2470\" (UID: \"75a4b747-c5a6-41c0-86cf-895a5fff2470\") " Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.503538 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2tmw\" (UniqueName: \"kubernetes.io/projected/eaf64ee3-3552-488c-9dff-15dc31783f22-kube-api-access-k2tmw\") pod \"eaf64ee3-3552-488c-9dff-15dc31783f22\" (UID: \"eaf64ee3-3552-488c-9dff-15dc31783f22\") " Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.503582 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdsbv\" (UniqueName: \"kubernetes.io/projected/75a4b747-c5a6-41c0-86cf-895a5fff2470-kube-api-access-cdsbv\") pod \"75a4b747-c5a6-41c0-86cf-895a5fff2470\" (UID: \"75a4b747-c5a6-41c0-86cf-895a5fff2470\") " Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.503716 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtlgk\" (UniqueName: \"kubernetes.io/projected/1318204d-e75b-4ba8-b804-298eded6f129-kube-api-access-gtlgk\") pod \"1318204d-e75b-4ba8-b804-298eded6f129\" (UID: \"1318204d-e75b-4ba8-b804-298eded6f129\") " Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.504152 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa0c2253-60c9-48a0-ab78-8a63f736b36f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.504176 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttq2c\" (UniqueName: \"kubernetes.io/projected/fa0c2253-60c9-48a0-ab78-8a63f736b36f-kube-api-access-ttq2c\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.505837 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1318204d-e75b-4ba8-b804-298eded6f129-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1318204d-e75b-4ba8-b804-298eded6f129" (UID: "1318204d-e75b-4ba8-b804-298eded6f129"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.505905 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaf64ee3-3552-488c-9dff-15dc31783f22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eaf64ee3-3552-488c-9dff-15dc31783f22" (UID: "eaf64ee3-3552-488c-9dff-15dc31783f22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.506492 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75a4b747-c5a6-41c0-86cf-895a5fff2470-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75a4b747-c5a6-41c0-86cf-895a5fff2470" (UID: "75a4b747-c5a6-41c0-86cf-895a5fff2470"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.508772 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1318204d-e75b-4ba8-b804-298eded6f129-kube-api-access-gtlgk" (OuterVolumeSpecName: "kube-api-access-gtlgk") pod "1318204d-e75b-4ba8-b804-298eded6f129" (UID: "1318204d-e75b-4ba8-b804-298eded6f129"). InnerVolumeSpecName "kube-api-access-gtlgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.509280 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a4b747-c5a6-41c0-86cf-895a5fff2470-kube-api-access-cdsbv" (OuterVolumeSpecName: "kube-api-access-cdsbv") pod "75a4b747-c5a6-41c0-86cf-895a5fff2470" (UID: "75a4b747-c5a6-41c0-86cf-895a5fff2470"). InnerVolumeSpecName "kube-api-access-cdsbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.515394 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaf64ee3-3552-488c-9dff-15dc31783f22-kube-api-access-k2tmw" (OuterVolumeSpecName: "kube-api-access-k2tmw") pod "eaf64ee3-3552-488c-9dff-15dc31783f22" (UID: "eaf64ee3-3552-488c-9dff-15dc31783f22"). InnerVolumeSpecName "kube-api-access-k2tmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.524556 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0617-account-create-update-vwt74" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.609293 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl5jb\" (UniqueName: \"kubernetes.io/projected/3b4efdbb-8b77-4889-be40-c74e7eac7392-kube-api-access-fl5jb\") pod \"3b4efdbb-8b77-4889-be40-c74e7eac7392\" (UID: \"3b4efdbb-8b77-4889-be40-c74e7eac7392\") " Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.609654 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4efdbb-8b77-4889-be40-c74e7eac7392-operator-scripts\") pod \"3b4efdbb-8b77-4889-be40-c74e7eac7392\" (UID: \"3b4efdbb-8b77-4889-be40-c74e7eac7392\") " Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.610785 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b4efdbb-8b77-4889-be40-c74e7eac7392-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b4efdbb-8b77-4889-be40-c74e7eac7392" (UID: "3b4efdbb-8b77-4889-be40-c74e7eac7392"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.612043 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a4b747-c5a6-41c0-86cf-895a5fff2470-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.612070 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2tmw\" (UniqueName: \"kubernetes.io/projected/eaf64ee3-3552-488c-9dff-15dc31783f22-kube-api-access-k2tmw\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.612082 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdsbv\" (UniqueName: \"kubernetes.io/projected/75a4b747-c5a6-41c0-86cf-895a5fff2470-kube-api-access-cdsbv\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.612092 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtlgk\" (UniqueName: \"kubernetes.io/projected/1318204d-e75b-4ba8-b804-298eded6f129-kube-api-access-gtlgk\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.612102 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1318204d-e75b-4ba8-b804-298eded6f129-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.612112 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaf64ee3-3552-488c-9dff-15dc31783f22-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.612121 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4efdbb-8b77-4889-be40-c74e7eac7392-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.614281 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4efdbb-8b77-4889-be40-c74e7eac7392-kube-api-access-fl5jb" (OuterVolumeSpecName: "kube-api-access-fl5jb") pod "3b4efdbb-8b77-4889-be40-c74e7eac7392" (UID: "3b4efdbb-8b77-4889-be40-c74e7eac7392"). 
InnerVolumeSpecName "kube-api-access-fl5jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:24:40 crc kubenswrapper[4772]: I1128 11:24:40.714107 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl5jb\" (UniqueName: \"kubernetes.io/projected/3b4efdbb-8b77-4889-be40-c74e7eac7392-kube-api-access-fl5jb\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.223199 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e7b-account-create-update-nd78w" event={"ID":"eaf64ee3-3552-488c-9dff-15dc31783f22","Type":"ContainerDied","Data":"6613e45c85a1d01f2573d226a7e72d0e140c4ec91d8c46723d7cf33be4d9782a"} Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.223727 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6613e45c85a1d01f2573d226a7e72d0e140c4ec91d8c46723d7cf33be4d9782a" Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.223864 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e7b-account-create-update-nd78w" Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.236760 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-86jrn" event={"ID":"806ae6be-af8d-492c-8556-797040276b12","Type":"ContainerStarted","Data":"3b04f2c7fd2ed503d91c08f1c1cd318ce9b8dcbedb98723a1e067fa8c386b690"} Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.245671 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-r2v2z" Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.246354 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-r2v2z" event={"ID":"1318204d-e75b-4ba8-b804-298eded6f129","Type":"ContainerDied","Data":"968b819876ade473bb3822836534b9dfd0966a2fa6165819a5ebb18627c5a7d1"} Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.246427 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="968b819876ade473bb3822836534b9dfd0966a2fa6165819a5ebb18627c5a7d1" Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.250495 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qnrnt" Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.250529 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qnrnt" event={"ID":"fa0c2253-60c9-48a0-ab78-8a63f736b36f","Type":"ContainerDied","Data":"0ecabb54b2ab490ff510b555dfce1f7501843ea272191c87ec0877517630b0a2"} Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.250598 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ecabb54b2ab490ff510b555dfce1f7501843ea272191c87ec0877517630b0a2" Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.253191 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2plvb" event={"ID":"75a4b747-c5a6-41c0-86cf-895a5fff2470","Type":"ContainerDied","Data":"67f811002dcf9a8eec90146bbaaf16da41c9843790f7d4711687110246cc1651"} Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.253251 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67f811002dcf9a8eec90146bbaaf16da41c9843790f7d4711687110246cc1651" Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.253474 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-2plvb" Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.275937 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0617-account-create-update-vwt74" event={"ID":"3b4efdbb-8b77-4889-be40-c74e7eac7392","Type":"ContainerDied","Data":"4a61277c9dbda8a321a45224efa44efab371ea5ebf06ab2be428ea4f1655e17a"} Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.276006 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a61277c9dbda8a321a45224efa44efab371ea5ebf06ab2be428ea4f1655e17a" Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.276128 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0617-account-create-update-vwt74" Nov 28 11:24:41 crc kubenswrapper[4772]: I1128 11:24:41.277597 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-86jrn" podStartSLOduration=2.0823241279999998 podStartE2EDuration="7.277564052s" podCreationTimestamp="2025-11-28 11:24:34 +0000 UTC" firstStartedPulling="2025-11-28 11:24:35.087904492 +0000 UTC m=+1073.411147719" lastFinishedPulling="2025-11-28 11:24:40.283144416 +0000 UTC m=+1078.606387643" observedRunningTime="2025-11-28 11:24:41.269113422 +0000 UTC m=+1079.592356719" watchObservedRunningTime="2025-11-28 11:24:41.277564052 +0000 UTC m=+1079.600807309" Nov 28 11:24:44 crc kubenswrapper[4772]: I1128 11:24:44.340308 4772 generic.go:334] "Generic (PLEG): container finished" podID="806ae6be-af8d-492c-8556-797040276b12" containerID="3b04f2c7fd2ed503d91c08f1c1cd318ce9b8dcbedb98723a1e067fa8c386b690" exitCode=0 Nov 28 11:24:44 crc kubenswrapper[4772]: I1128 11:24:44.340382 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-86jrn" event={"ID":"806ae6be-af8d-492c-8556-797040276b12","Type":"ContainerDied","Data":"3b04f2c7fd2ed503d91c08f1c1cd318ce9b8dcbedb98723a1e067fa8c386b690"} Nov 28 11:24:45 crc kubenswrapper[4772]: I1128 11:24:45.755586 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-86jrn" Nov 28 11:24:45 crc kubenswrapper[4772]: I1128 11:24:45.817306 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqv7h\" (UniqueName: \"kubernetes.io/projected/806ae6be-af8d-492c-8556-797040276b12-kube-api-access-qqv7h\") pod \"806ae6be-af8d-492c-8556-797040276b12\" (UID: \"806ae6be-af8d-492c-8556-797040276b12\") " Nov 28 11:24:45 crc kubenswrapper[4772]: I1128 11:24:45.817455 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806ae6be-af8d-492c-8556-797040276b12-config-data\") pod \"806ae6be-af8d-492c-8556-797040276b12\" (UID: \"806ae6be-af8d-492c-8556-797040276b12\") " Nov 28 11:24:45 crc kubenswrapper[4772]: I1128 11:24:45.817547 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806ae6be-af8d-492c-8556-797040276b12-combined-ca-bundle\") pod \"806ae6be-af8d-492c-8556-797040276b12\" (UID: \"806ae6be-af8d-492c-8556-797040276b12\") " Nov 28 11:24:45 crc kubenswrapper[4772]: I1128 11:24:45.828155 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/806ae6be-af8d-492c-8556-797040276b12-kube-api-access-qqv7h" (OuterVolumeSpecName: "kube-api-access-qqv7h") pod "806ae6be-af8d-492c-8556-797040276b12" (UID: "806ae6be-af8d-492c-8556-797040276b12"). InnerVolumeSpecName "kube-api-access-qqv7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:24:45 crc kubenswrapper[4772]: I1128 11:24:45.847050 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/806ae6be-af8d-492c-8556-797040276b12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "806ae6be-af8d-492c-8556-797040276b12" (UID: "806ae6be-af8d-492c-8556-797040276b12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:24:45 crc kubenswrapper[4772]: I1128 11:24:45.885813 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/806ae6be-af8d-492c-8556-797040276b12-config-data" (OuterVolumeSpecName: "config-data") pod "806ae6be-af8d-492c-8556-797040276b12" (UID: "806ae6be-af8d-492c-8556-797040276b12"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:24:45 crc kubenswrapper[4772]: I1128 11:24:45.919889 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqv7h\" (UniqueName: \"kubernetes.io/projected/806ae6be-af8d-492c-8556-797040276b12-kube-api-access-qqv7h\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:45 crc kubenswrapper[4772]: I1128 11:24:45.919930 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806ae6be-af8d-492c-8556-797040276b12-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:45 crc kubenswrapper[4772]: I1128 11:24:45.919941 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806ae6be-af8d-492c-8556-797040276b12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.371089 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-86jrn" event={"ID":"806ae6be-af8d-492c-8556-797040276b12","Type":"ContainerDied","Data":"6cea3f434f711fbfb411199b54479c76841fe6b35f057494cb9349d3e5138e12"} Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.371143 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cea3f434f711fbfb411199b54479c76841fe6b35f057494cb9349d3e5138e12" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.371662 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-86jrn" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.768374 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7xjd9"] Nov 28 11:24:46 crc kubenswrapper[4772]: E1128 11:24:46.769227 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806ae6be-af8d-492c-8556-797040276b12" containerName="keystone-db-sync" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.769245 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="806ae6be-af8d-492c-8556-797040276b12" containerName="keystone-db-sync" Nov 28 11:24:46 crc kubenswrapper[4772]: E1128 11:24:46.769257 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a4b747-c5a6-41c0-86cf-895a5fff2470" containerName="mariadb-database-create" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.769263 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a4b747-c5a6-41c0-86cf-895a5fff2470" containerName="mariadb-database-create" Nov 28 11:24:46 crc kubenswrapper[4772]: E1128 11:24:46.769283 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4efdbb-8b77-4889-be40-c74e7eac7392" containerName="mariadb-account-create-update" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.769292 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4efdbb-8b77-4889-be40-c74e7eac7392" containerName="mariadb-account-create-update" Nov 28 11:24:46 crc kubenswrapper[4772]: E1128 11:24:46.769305 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf64ee3-3552-488c-9dff-15dc31783f22" containerName="mariadb-account-create-update" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.769310 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf64ee3-3552-488c-9dff-15dc31783f22" containerName="mariadb-account-create-update" Nov 28 11:24:46 crc kubenswrapper[4772]: E1128 11:24:46.769321 4772 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4b99c928-e317-4266-a09f-5e3a2d0eb4b5" containerName="mariadb-account-create-update" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.769328 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b99c928-e317-4266-a09f-5e3a2d0eb4b5" containerName="mariadb-account-create-update" Nov 28 11:24:46 crc kubenswrapper[4772]: E1128 11:24:46.769339 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae7cb7c-0b89-4e5c-b8d5-50631df88ac4" containerName="dnsmasq-dns" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.769345 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae7cb7c-0b89-4e5c-b8d5-50631df88ac4" containerName="dnsmasq-dns" Nov 28 11:24:46 crc kubenswrapper[4772]: E1128 11:24:46.778463 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae7cb7c-0b89-4e5c-b8d5-50631df88ac4" containerName="init" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.778509 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae7cb7c-0b89-4e5c-b8d5-50631df88ac4" containerName="init" Nov 28 11:24:46 crc kubenswrapper[4772]: E1128 11:24:46.778529 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1318204d-e75b-4ba8-b804-298eded6f129" containerName="mariadb-database-create" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.778536 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1318204d-e75b-4ba8-b804-298eded6f129" containerName="mariadb-database-create" Nov 28 11:24:46 crc kubenswrapper[4772]: E1128 11:24:46.778558 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0c2253-60c9-48a0-ab78-8a63f736b36f" containerName="mariadb-database-create" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.778567 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0c2253-60c9-48a0-ab78-8a63f736b36f" containerName="mariadb-database-create" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.778936 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b99c928-e317-4266-a09f-5e3a2d0eb4b5" containerName="mariadb-account-create-update" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.778955 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4efdbb-8b77-4889-be40-c74e7eac7392" containerName="mariadb-account-create-update" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.778967 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a4b747-c5a6-41c0-86cf-895a5fff2470" containerName="mariadb-database-create" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.778979 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae7cb7c-0b89-4e5c-b8d5-50631df88ac4" containerName="dnsmasq-dns" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.778989 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1318204d-e75b-4ba8-b804-298eded6f129" containerName="mariadb-database-create" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.779004 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf64ee3-3552-488c-9dff-15dc31783f22" containerName="mariadb-account-create-update" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.779016 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="806ae6be-af8d-492c-8556-797040276b12" containerName="keystone-db-sync" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.779030 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa0c2253-60c9-48a0-ab78-8a63f736b36f" 
containerName="mariadb-database-create" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.779913 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.783760 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.784141 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.784239 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4j4h4" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.790450 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.790641 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.818704 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7xjd9"] Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.834994 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-fernet-keys\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.835059 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq6qp\" (UniqueName: \"kubernetes.io/projected/a89f9035-6c39-41ce-b7e4-31df46e662ce-kube-api-access-sq6qp\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.835101 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-combined-ca-bundle\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.835137 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-config-data\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.835167 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-credential-keys\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.835194 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-scripts\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " 
pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.843583 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-bw59f"] Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.850376 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.952188 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq6qp\" (UniqueName: \"kubernetes.io/projected/a89f9035-6c39-41ce-b7e4-31df46e662ce-kube-api-access-sq6qp\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.952287 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-combined-ca-bundle\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.952338 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-config-data\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.952396 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-credential-keys\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.952434 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-scripts\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.952480 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-fernet-keys\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:46 crc kubenswrapper[4772]: I1128 11:24:46.960793 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-credential-keys\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.001317 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-bw59f"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.016642 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-scripts\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 
11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.030714 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-combined-ca-bundle\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.032994 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-config-data\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.051685 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq6qp\" (UniqueName: \"kubernetes.io/projected/a89f9035-6c39-41ce-b7e4-31df46e662ce-kube-api-access-sq6qp\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.051913 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-fernet-keys\") pod \"keystone-bootstrap-7xjd9\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.059353 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.059592 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm8sb\" (UniqueName: \"kubernetes.io/projected/f47c334e-2b64-403b-a682-0cb12b2c60d3-kube-api-access-jm8sb\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.059758 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.059806 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.059917 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-config\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 
11:24:47.060118 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.142450 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-72xt2"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.143870 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.151864 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.159802 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.160169 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-swb64" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.161822 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.161868 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.161924 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm8sb\" (UniqueName: \"kubernetes.io/projected/f47c334e-2b64-403b-a682-0cb12b2c60d3-kube-api-access-jm8sb\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.161996 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.162018 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.162099 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-config\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc 
kubenswrapper[4772]: I1128 11:24:47.163188 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-config\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.166271 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.166986 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.167558 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.167590 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.176083 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.185062 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-72xt2"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.201538 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75774664c5-v7rms"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.203273 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.207850 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.208198 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-jpv2r" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.208497 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.208629 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.230571 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm8sb\" (UniqueName: \"kubernetes.io/projected/f47c334e-2b64-403b-a682-0cb12b2c60d3-kube-api-access-jm8sb\") pod \"dnsmasq-dns-bbf5cc879-bw59f\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.231882 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75774664c5-v7rms"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.240044 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-gkxww"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.249757 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.250007 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gkxww" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.254379 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.267340 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-combined-ca-bundle\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.268511 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-config-data\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.268546 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-scripts\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.268600 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-etc-machine-id\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 
11:24:47.268646 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwclr\" (UniqueName: \"kubernetes.io/projected/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-kube-api-access-gwclr\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.268675 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-db-sync-config-data\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.270374 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d5j4l" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.270849 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.290472 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-krcf2"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.291930 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.295663 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zw9g7" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.300909 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.300909 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.316847 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gkxww"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.360203 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-krcf2"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.373902 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rblm\" (UniqueName: \"kubernetes.io/projected/d7a16df0-9676-4d80-adc5-305fe795deb7-kube-api-access-4rblm\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.373967 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31c75297-c867-4c84-8183-239f47947895-config\") pod \"neutron-db-sync-gkxww\" (UID: \"31c75297-c867-4c84-8183-239f47947895\") " pod="openstack/neutron-db-sync-gkxww" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.374022 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c75297-c867-4c84-8183-239f47947895-combined-ca-bundle\") pod \"neutron-db-sync-gkxww\" (UID: \"31c75297-c867-4c84-8183-239f47947895\") " pod="openstack/neutron-db-sync-gkxww" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.374069 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-combined-ca-bundle\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.374120 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-config-data\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.374152 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-scripts\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.374229 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl7p2\" (UniqueName: \"kubernetes.io/projected/31c75297-c867-4c84-8183-239f47947895-kube-api-access-wl7p2\") pod \"neutron-db-sync-gkxww\" (UID: \"31c75297-c867-4c84-8183-239f47947895\") " pod="openstack/neutron-db-sync-gkxww" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.374253 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-etc-machine-id\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.374294 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7a16df0-9676-4d80-adc5-305fe795deb7-scripts\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.374323 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7a16df0-9676-4d80-adc5-305fe795deb7-logs\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.374374 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwclr\" (UniqueName: \"kubernetes.io/projected/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-kube-api-access-gwclr\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.374410 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-db-sync-config-data\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.374435 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/d7a16df0-9676-4d80-adc5-305fe795deb7-horizon-secret-key\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.374540 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7a16df0-9676-4d80-adc5-305fe795deb7-config-data\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.388201 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-combined-ca-bundle\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.389712 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-etc-machine-id\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.394302 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-db-sync-config-data\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.410977 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-scripts\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.414573 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-config-data\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.420489 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwclr\" (UniqueName: \"kubernetes.io/projected/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-kube-api-access-gwclr\") pod \"cinder-db-sync-72xt2\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.428440 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-bw59f"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.476304 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7a16df0-9676-4d80-adc5-305fe795deb7-horizon-secret-key\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.476415 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d7a16df0-9676-4d80-adc5-305fe795deb7-config-data\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.476456 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rblm\" (UniqueName: \"kubernetes.io/projected/d7a16df0-9676-4d80-adc5-305fe795deb7-kube-api-access-4rblm\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.476491 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw2gr\" (UniqueName: \"kubernetes.io/projected/3d1f86f7-529a-4ed1-885d-1beb4c14b213-kube-api-access-jw2gr\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.476516 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31c75297-c867-4c84-8183-239f47947895-config\") pod \"neutron-db-sync-gkxww\" (UID: \"31c75297-c867-4c84-8183-239f47947895\") " pod="openstack/neutron-db-sync-gkxww" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.476546 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c75297-c867-4c84-8183-239f47947895-combined-ca-bundle\") pod \"neutron-db-sync-gkxww\" (UID: \"31c75297-c867-4c84-8183-239f47947895\") " pod="openstack/neutron-db-sync-gkxww" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.476580 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-scripts\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.476615 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-config-data\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.476631 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-combined-ca-bundle\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.476655 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d1f86f7-529a-4ed1-885d-1beb4c14b213-logs\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.476678 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl7p2\" (UniqueName: 
\"kubernetes.io/projected/31c75297-c867-4c84-8183-239f47947895-kube-api-access-wl7p2\") pod \"neutron-db-sync-gkxww\" (UID: \"31c75297-c867-4c84-8183-239f47947895\") " pod="openstack/neutron-db-sync-gkxww" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.476707 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7a16df0-9676-4d80-adc5-305fe795deb7-scripts\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.476727 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7a16df0-9676-4d80-adc5-305fe795deb7-logs\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.477434 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7a16df0-9676-4d80-adc5-305fe795deb7-logs\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.479031 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7a16df0-9676-4d80-adc5-305fe795deb7-config-data\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.480494 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7a16df0-9676-4d80-adc5-305fe795deb7-horizon-secret-key\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.482051 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7a16df0-9676-4d80-adc5-305fe795deb7-scripts\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.482401 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.483736 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/31c75297-c867-4c84-8183-239f47947895-config\") pod \"neutron-db-sync-gkxww\" (UID: \"31c75297-c867-4c84-8183-239f47947895\") " pod="openstack/neutron-db-sync-gkxww" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.485919 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.491258 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c75297-c867-4c84-8183-239f47947895-combined-ca-bundle\") pod \"neutron-db-sync-gkxww\" (UID: \"31c75297-c867-4c84-8183-239f47947895\") " pod="openstack/neutron-db-sync-gkxww" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.496535 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.496956 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.506975 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl7p2\" (UniqueName: \"kubernetes.io/projected/31c75297-c867-4c84-8183-239f47947895-kube-api-access-wl7p2\") pod \"neutron-db-sync-gkxww\" (UID: \"31c75297-c867-4c84-8183-239f47947895\") " pod="openstack/neutron-db-sync-gkxww" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.507129 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-j9ck9"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.508523 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-j9ck9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.520270 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.520557 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-54jnx" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.537549 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rblm\" (UniqueName: \"kubernetes.io/projected/d7a16df0-9676-4d80-adc5-305fe795deb7-kube-api-access-4rblm\") pod \"horizon-75774664c5-v7rms\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.552958 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.566287 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-j9ck9"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.571888 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-72xt2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.579481 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-combined-ca-bundle\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.579522 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-config-data\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.579545 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d1f86f7-529a-4ed1-885d-1beb4c14b213-logs\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.579578 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-config-data\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.579651 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8f47327-6cc4-4ca2-9363-0eca9d129686-run-httpd\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.579691 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.579724 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.579793 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw2gr\" (UniqueName: \"kubernetes.io/projected/3d1f86f7-529a-4ed1-885d-1beb4c14b213-kube-api-access-jw2gr\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.579817 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-scripts\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.579858 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8f47327-6cc4-4ca2-9363-0eca9d129686-log-httpd\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.579884 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-scripts\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.579906 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgw9c\" (UniqueName: \"kubernetes.io/projected/a8f47327-6cc4-4ca2-9363-0eca9d129686-kube-api-access-mgw9c\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.584685 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d1f86f7-529a-4ed1-885d-1beb4c14b213-logs\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.587271 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-combined-ca-bundle\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.590555 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-scripts\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.596122 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-config-data\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.599992 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw2gr\" (UniqueName: \"kubernetes.io/projected/3d1f86f7-529a-4ed1-885d-1beb4c14b213-kube-api-access-jw2gr\") pod \"placement-db-sync-krcf2\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") " pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.625078 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.626462 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wnq4f"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.628102 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.656639 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-gkxww" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.673769 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-krcf2" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.682654 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e2ef4f-2f84-4b24-9a30-27fea471faf5-combined-ca-bundle\") pod \"barbican-db-sync-j9ck9\" (UID: \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\") " pod="openstack/barbican-db-sync-j9ck9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.682713 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-scripts\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.682764 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shjcd\" (UniqueName: \"kubernetes.io/projected/59e2ef4f-2f84-4b24-9a30-27fea471faf5-kube-api-access-shjcd\") pod \"barbican-db-sync-j9ck9\" (UID: \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\") " pod="openstack/barbican-db-sync-j9ck9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.682791 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8f47327-6cc4-4ca2-9363-0eca9d129686-log-httpd\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.682816 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgw9c\" (UniqueName: \"kubernetes.io/projected/a8f47327-6cc4-4ca2-9363-0eca9d129686-kube-api-access-mgw9c\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.682848 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59e2ef4f-2f84-4b24-9a30-27fea471faf5-db-sync-config-data\") pod \"barbican-db-sync-j9ck9\" (UID: \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\") " pod="openstack/barbican-db-sync-j9ck9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.682878 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-config-data\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.682964 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8f47327-6cc4-4ca2-9363-0eca9d129686-run-httpd\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.682994 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " 
pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.683020 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.683765 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8f47327-6cc4-4ca2-9363-0eca9d129686-run-httpd\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.685895 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8f47327-6cc4-4ca2-9363-0eca9d129686-log-httpd\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.686893 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wnq4f"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.693575 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-scripts\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.694680 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.695894 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.695948 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-config-data\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.696863 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.702847 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.706180 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vv5mt" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.707276 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.707691 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.708218 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.716649 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgw9c\" (UniqueName: \"kubernetes.io/projected/a8f47327-6cc4-4ca2-9363-0eca9d129686-kube-api-access-mgw9c\") pod \"ceilometer-0\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.726891 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.756813 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6fb59ccf89-j6w9l"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.761334 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.772521 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6fb59ccf89-j6w9l"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.796467 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.809148 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-config\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.826527 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shjcd\" (UniqueName: \"kubernetes.io/projected/59e2ef4f-2f84-4b24-9a30-27fea471faf5-kube-api-access-shjcd\") pod \"barbican-db-sync-j9ck9\" (UID: \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\") " pod="openstack/barbican-db-sync-j9ck9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.826726 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59e2ef4f-2f84-4b24-9a30-27fea471faf5-db-sync-config-data\") pod \"barbican-db-sync-j9ck9\" (UID: \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\") " pod="openstack/barbican-db-sync-j9ck9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.827402 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc 
kubenswrapper[4772]: I1128 11:24:47.827710 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.828831 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.842237 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.842869 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f9lf\" (UniqueName: \"kubernetes.io/projected/0283d234-d683-44b1-8e41-71fef0c61b16-kube-api-access-5f9lf\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.849005 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e2ef4f-2f84-4b24-9a30-27fea471faf5-combined-ca-bundle\") pod \"barbican-db-sync-j9ck9\" (UID: \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\") " pod="openstack/barbican-db-sync-j9ck9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.862199 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.862406 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.870895 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.873842 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.874939 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.877458 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59e2ef4f-2f84-4b24-9a30-27fea471faf5-db-sync-config-data\") pod \"barbican-db-sync-j9ck9\" (UID: \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\") " pod="openstack/barbican-db-sync-j9ck9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.888162 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shjcd\" (UniqueName: \"kubernetes.io/projected/59e2ef4f-2f84-4b24-9a30-27fea471faf5-kube-api-access-shjcd\") pod \"barbican-db-sync-j9ck9\" (UID: \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\") " pod="openstack/barbican-db-sync-j9ck9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.896117 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e2ef4f-2f84-4b24-9a30-27fea471faf5-combined-ca-bundle\") pod \"barbican-db-sync-j9ck9\" (UID: \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\") " pod="openstack/barbican-db-sync-j9ck9" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954051 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czs2h\" (UniqueName: \"kubernetes.io/projected/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-kube-api-access-czs2h\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954105 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c946f8ec-d68e-4217-9076-b5746c9c8439-config-data\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954129 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c946f8ec-d68e-4217-9076-b5746c9c8439-logs\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954144 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c946f8ec-d68e-4217-9076-b5746c9c8439-scripts\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954165 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: 
I1128 11:24:47.954271 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954330 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954410 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954433 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-config-data\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954502 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954538 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f9lf\" (UniqueName: \"kubernetes.io/projected/0283d234-d683-44b1-8e41-71fef0c61b16-kube-api-access-5f9lf\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954565 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954587 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954604 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954629 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-logs\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954658 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954682 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a9a8a44-19de-4233-acc9-a73d8aecea2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954707 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-config\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954726 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9a8a44-19de-4233-acc9-a73d8aecea2a-logs\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954747 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-scripts\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954780 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c946f8ec-d68e-4217-9076-b5746c9c8439-horizon-secret-key\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954801 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954824 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954840 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49kfv\" (UniqueName: \"kubernetes.io/projected/7a9a8a44-19de-4233-acc9-a73d8aecea2a-kube-api-access-49kfv\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954857 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lltxz\" (UniqueName: \"kubernetes.io/projected/c946f8ec-d68e-4217-9076-b5746c9c8439-kube-api-access-lltxz\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954884 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.954901 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.960880 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.965160 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.968004 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.972030 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.984727 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-config\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:47 crc kubenswrapper[4772]: I1128 11:24:47.998574 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f9lf\" (UniqueName: \"kubernetes.io/projected/0283d234-d683-44b1-8e41-71fef0c61b16-kube-api-access-5f9lf\") pod \"dnsmasq-dns-56df8fb6b7-wnq4f\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.061055 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.061400 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.061493 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.061588 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-logs\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.061696 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.071831 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a9a8a44-19de-4233-acc9-a73d8aecea2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.068697 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.072493 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9a8a44-19de-4233-acc9-a73d8aecea2a-logs\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.068926 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-logs\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.072612 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-scripts\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.073281 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c946f8ec-d68e-4217-9076-b5746c9c8439-horizon-secret-key\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.074562 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.074730 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.074837 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49kfv\" (UniqueName: \"kubernetes.io/projected/7a9a8a44-19de-4233-acc9-a73d8aecea2a-kube-api-access-49kfv\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.074935 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lltxz\" (UniqueName: \"kubernetes.io/projected/c946f8ec-d68e-4217-9076-b5746c9c8439-kube-api-access-lltxz\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.075127 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czs2h\" (UniqueName: \"kubernetes.io/projected/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-kube-api-access-czs2h\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.075334 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c946f8ec-d68e-4217-9076-b5746c9c8439-config-data\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.075472 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c946f8ec-d68e-4217-9076-b5746c9c8439-logs\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.075562 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c946f8ec-d68e-4217-9076-b5746c9c8439-scripts\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.075661 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.075785 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.075945 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-config-data\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.076988 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c946f8ec-d68e-4217-9076-b5746c9c8439-logs\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.076189 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9a8a44-19de-4233-acc9-a73d8aecea2a-logs\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.074152 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.076776 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.076043 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a9a8a44-19de-4233-acc9-a73d8aecea2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.077336 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.079455 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c946f8ec-d68e-4217-9076-b5746c9c8439-config-data\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.079656 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.080289 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c946f8ec-d68e-4217-9076-b5746c9c8439-scripts\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.081935 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-bw59f"] Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.082503 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c946f8ec-d68e-4217-9076-b5746c9c8439-horizon-secret-key\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.084201 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.084545 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-scripts\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.109261 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7xjd9"] Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.117054 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.130585 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-config-data\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.131248 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.136331 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.139389 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49kfv\" (UniqueName: \"kubernetes.io/projected/7a9a8a44-19de-4233-acc9-a73d8aecea2a-kube-api-access-49kfv\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.140924 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.141588 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czs2h\" (UniqueName: \"kubernetes.io/projected/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-kube-api-access-czs2h\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.143079 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lltxz\" (UniqueName: \"kubernetes.io/projected/c946f8ec-d68e-4217-9076-b5746c9c8439-kube-api-access-lltxz\") pod \"horizon-6fb59ccf89-j6w9l\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.152523 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.186533 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.196611 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-j9ck9" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.196738 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.270078 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.353029 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.373160 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-72xt2"] Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.394972 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-krcf2"] Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.405639 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.407307 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75774664c5-v7rms"] Nov 28 11:24:48 crc kubenswrapper[4772]: W1128 11:24:48.420071 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7a16df0_9676_4d80_adc5_305fe795deb7.slice/crio-707f3902e9585e0e9fc75d4a0d7bfaed13fec2a062a647d066ac7b28079c2247 WatchSource:0}: Error finding container 707f3902e9585e0e9fc75d4a0d7bfaed13fec2a062a647d066ac7b28079c2247: Status 404 returned error can't find the container with id 707f3902e9585e0e9fc75d4a0d7bfaed13fec2a062a647d066ac7b28079c2247 Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.437116 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gkxww"] Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.473256 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7xjd9" event={"ID":"a89f9035-6c39-41ce-b7e4-31df46e662ce","Type":"ContainerStarted","Data":"d5f3945f503b971fd8ae0ca145854ade3143c93bc7bda34205b5006563325a2d"} Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.477545 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" event={"ID":"f47c334e-2b64-403b-a682-0cb12b2c60d3","Type":"ContainerStarted","Data":"8ab1b5df2066894ed0fbec9767a763a94561eb5715deaaafaddbf41dd241d25b"} Nov 28 11:24:48 crc kubenswrapper[4772]: W1128 11:24:48.492322 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31c75297_c867_4c84_8183_239f47947895.slice/crio-03b7eea2f7440f8d4c28bde453734fbeb1b662d2be5459a77015ccc564208598 WatchSource:0}: Error finding container 03b7eea2f7440f8d4c28bde453734fbeb1b662d2be5459a77015ccc564208598: Status 404 returned error can't find the container with id 03b7eea2f7440f8d4c28bde453734fbeb1b662d2be5459a77015ccc564208598 Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.520823 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:24:48 crc kubenswrapper[4772]: W1128 11:24:48.547866 4772 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8f47327_6cc4_4ca2_9363_0eca9d129686.slice/crio-c118db470904123053baba4b76c5f4d3c0c4eb18f72de50346c9ae99cfd0d670 WatchSource:0}: Error finding container c118db470904123053baba4b76c5f4d3c0c4eb18f72de50346c9ae99cfd0d670: Status 404 returned error can't find the container with id c118db470904123053baba4b76c5f4d3c0c4eb18f72de50346c9ae99cfd0d670 Nov 28 11:24:48 crc kubenswrapper[4772]: I1128 11:24:48.989099 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-j9ck9"] Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.272863 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wnq4f"] Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.282905 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.409467 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.515069 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75774664c5-v7rms" event={"ID":"d7a16df0-9676-4d80-adc5-305fe795deb7","Type":"ContainerStarted","Data":"707f3902e9585e0e9fc75d4a0d7bfaed13fec2a062a647d066ac7b28079c2247"} Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.517367 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-72xt2" event={"ID":"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe","Type":"ContainerStarted","Data":"2f84de32a7ec32e70c44df724483c6b8911aca5556c34e4e59c152f25f8d989e"} Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.519094 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j9ck9" event={"ID":"59e2ef4f-2f84-4b24-9a30-27fea471faf5","Type":"ContainerStarted","Data":"c1ce997143cb9ab18be0d175f91ac4cf6846976f0856926ba6befc0758f9aaa7"} Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.525129 4772 generic.go:334] "Generic (PLEG): container finished" podID="f47c334e-2b64-403b-a682-0cb12b2c60d3" containerID="f1b5041f91f595c84e64e08db3ae2d13ac736e1fea9c8960b4344bbf9a7ce0c8" exitCode=0 Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.525269 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" event={"ID":"f47c334e-2b64-403b-a682-0cb12b2c60d3","Type":"ContainerDied","Data":"f1b5041f91f595c84e64e08db3ae2d13ac736e1fea9c8960b4344bbf9a7ce0c8"} Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.531906 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" event={"ID":"0283d234-d683-44b1-8e41-71fef0c61b16","Type":"ContainerStarted","Data":"a3a58bfa36d2827b9e458ca8b8984a2966d400c63ae340a2cd9529be4a444124"} Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.534681 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7a9a8a44-19de-4233-acc9-a73d8aecea2a","Type":"ContainerStarted","Data":"8ec042bf9195f73d5af2e76e6258960c21f8a8e2d8a28384e0e9d6bf6df552c5"} Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.562442 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6fb59ccf89-j6w9l"] Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.613083 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7xjd9" 
event={"ID":"a89f9035-6c39-41ce-b7e4-31df46e662ce","Type":"ContainerStarted","Data":"76ffd2b9c18295f3f278a4a8d1b93bbca59982d3fdf63509e8d4bde76968d8c0"} Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.627089 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8f47327-6cc4-4ca2-9363-0eca9d129686","Type":"ContainerStarted","Data":"c118db470904123053baba4b76c5f4d3c0c4eb18f72de50346c9ae99cfd0d670"} Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.646154 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7xjd9" podStartSLOduration=3.646132883 podStartE2EDuration="3.646132883s" podCreationTimestamp="2025-11-28 11:24:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:24:49.644694214 +0000 UTC m=+1087.967937441" watchObservedRunningTime="2025-11-28 11:24:49.646132883 +0000 UTC m=+1087.969376110" Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.671273 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gkxww" event={"ID":"31c75297-c867-4c84-8183-239f47947895","Type":"ContainerStarted","Data":"84264bb416fd01c01bb7e11ef9f7e9b973a10f012c5a264cf6c2b9a4fdae3df5"} Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.671334 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gkxww" event={"ID":"31c75297-c867-4c84-8183-239f47947895","Type":"ContainerStarted","Data":"03b7eea2f7440f8d4c28bde453734fbeb1b662d2be5459a77015ccc564208598"} Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.673063 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75774664c5-v7rms"] Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.688738 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-krcf2" event={"ID":"3d1f86f7-529a-4ed1-885d-1beb4c14b213","Type":"ContainerStarted","Data":"1014aa2e197e4599bc5889bf3db8b5d4329149e25cca5ca127c9f20cd5f35f4d"} Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.748881 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.808317 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.812715 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-gkxww" podStartSLOduration=2.812685298 podStartE2EDuration="2.812685298s" podCreationTimestamp="2025-11-28 11:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:24:49.720712744 +0000 UTC m=+1088.043955981" watchObservedRunningTime="2025-11-28 11:24:49.812685298 +0000 UTC m=+1088.135928525" Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.860003 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f9585d4fc-mmppw"] Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.862042 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.892330 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f9585d4fc-mmppw"] Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.904482 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.972084 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzws7\" (UniqueName: \"kubernetes.io/projected/9b59c268-e1dd-41c3-bfbb-945870d0df3c-kube-api-access-kzws7\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.972156 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b59c268-e1dd-41c3-bfbb-945870d0df3c-config-data\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.972204 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b59c268-e1dd-41c3-bfbb-945870d0df3c-logs\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.972261 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b59c268-e1dd-41c3-bfbb-945870d0df3c-scripts\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:49 crc kubenswrapper[4772]: I1128 11:24:49.972287 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b59c268-e1dd-41c3-bfbb-945870d0df3c-horizon-secret-key\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.087039 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b59c268-e1dd-41c3-bfbb-945870d0df3c-scripts\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.087121 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b59c268-e1dd-41c3-bfbb-945870d0df3c-horizon-secret-key\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.088307 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b59c268-e1dd-41c3-bfbb-945870d0df3c-scripts\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.089305 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzws7\" (UniqueName: \"kubernetes.io/projected/9b59c268-e1dd-41c3-bfbb-945870d0df3c-kube-api-access-kzws7\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.089395 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b59c268-e1dd-41c3-bfbb-945870d0df3c-config-data\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.089494 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b59c268-e1dd-41c3-bfbb-945870d0df3c-logs\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.090329 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b59c268-e1dd-41c3-bfbb-945870d0df3c-logs\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.092093 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b59c268-e1dd-41c3-bfbb-945870d0df3c-config-data\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.109551 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzws7\" (UniqueName: \"kubernetes.io/projected/9b59c268-e1dd-41c3-bfbb-945870d0df3c-kube-api-access-kzws7\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.113927 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b59c268-e1dd-41c3-bfbb-945870d0df3c-horizon-secret-key\") pod \"horizon-f9585d4fc-mmppw\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.195297 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.256513 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.292372 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-dns-svc\") pod \"f47c334e-2b64-403b-a682-0cb12b2c60d3\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.292448 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-ovsdbserver-sb\") pod \"f47c334e-2b64-403b-a682-0cb12b2c60d3\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.292480 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-ovsdbserver-nb\") pod \"f47c334e-2b64-403b-a682-0cb12b2c60d3\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.292500 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-config\") pod \"f47c334e-2b64-403b-a682-0cb12b2c60d3\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.292554 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm8sb\" (UniqueName: \"kubernetes.io/projected/f47c334e-2b64-403b-a682-0cb12b2c60d3-kube-api-access-jm8sb\") pod \"f47c334e-2b64-403b-a682-0cb12b2c60d3\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.292718 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-dns-swift-storage-0\") pod \"f47c334e-2b64-403b-a682-0cb12b2c60d3\" (UID: \"f47c334e-2b64-403b-a682-0cb12b2c60d3\") " Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.307537 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f47c334e-2b64-403b-a682-0cb12b2c60d3-kube-api-access-jm8sb" (OuterVolumeSpecName: "kube-api-access-jm8sb") pod "f47c334e-2b64-403b-a682-0cb12b2c60d3" (UID: "f47c334e-2b64-403b-a682-0cb12b2c60d3"). InnerVolumeSpecName "kube-api-access-jm8sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.331925 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-config" (OuterVolumeSpecName: "config") pod "f47c334e-2b64-403b-a682-0cb12b2c60d3" (UID: "f47c334e-2b64-403b-a682-0cb12b2c60d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.346312 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f47c334e-2b64-403b-a682-0cb12b2c60d3" (UID: "f47c334e-2b64-403b-a682-0cb12b2c60d3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.361870 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f47c334e-2b64-403b-a682-0cb12b2c60d3" (UID: "f47c334e-2b64-403b-a682-0cb12b2c60d3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.362898 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f47c334e-2b64-403b-a682-0cb12b2c60d3" (UID: "f47c334e-2b64-403b-a682-0cb12b2c60d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.396057 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.396112 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.396144 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm8sb\" (UniqueName: \"kubernetes.io/projected/f47c334e-2b64-403b-a682-0cb12b2c60d3-kube-api-access-jm8sb\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.396159 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.396172 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.406533 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f47c334e-2b64-403b-a682-0cb12b2c60d3" (UID: "f47c334e-2b64-403b-a682-0cb12b2c60d3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.499090 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47c334e-2b64-403b-a682-0cb12b2c60d3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.713839 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" event={"ID":"f47c334e-2b64-403b-a682-0cb12b2c60d3","Type":"ContainerDied","Data":"8ab1b5df2066894ed0fbec9767a763a94561eb5715deaaafaddbf41dd241d25b"} Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.714305 4772 scope.go:117] "RemoveContainer" containerID="f1b5041f91f595c84e64e08db3ae2d13ac736e1fea9c8960b4344bbf9a7ce0c8" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.714141 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-bw59f" Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.739046 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f9585d4fc-mmppw"] Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.742812 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"108f92bc-b212-4233-ad3c-2ce3c1d0cc99","Type":"ContainerStarted","Data":"1cf741f78fb72cb1fef124742845a0be120d6a41820bf49bd67d2f17bcdb3918"} Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.756768 4772 generic.go:334] "Generic (PLEG): container finished" podID="0283d234-d683-44b1-8e41-71fef0c61b16" containerID="5f39cf82eb73d431608d167b1731cb89f219694dbb8f97e35aed6537d64554f5" exitCode=0 Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.756921 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" event={"ID":"0283d234-d683-44b1-8e41-71fef0c61b16","Type":"ContainerDied","Data":"5f39cf82eb73d431608d167b1731cb89f219694dbb8f97e35aed6537d64554f5"} Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.760630 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fb59ccf89-j6w9l" event={"ID":"c946f8ec-d68e-4217-9076-b5746c9c8439","Type":"ContainerStarted","Data":"4d7307ae4b2ca1cbec646da925bbee9d29ccc9c798185593266be207831feafc"} Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.825408 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-bw59f"] Nov 28 11:24:50 crc kubenswrapper[4772]: W1128 11:24:50.829930 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b59c268_e1dd_41c3_bfbb_945870d0df3c.slice/crio-4f754200d615008e9636953e0bbb084e0823de9b5c4cf8e5bc5a1f3a98a35a66 WatchSource:0}: Error finding container 4f754200d615008e9636953e0bbb084e0823de9b5c4cf8e5bc5a1f3a98a35a66: Status 404 returned error can't find the container with id 4f754200d615008e9636953e0bbb084e0823de9b5c4cf8e5bc5a1f3a98a35a66 Nov 28 11:24:50 crc kubenswrapper[4772]: I1128 11:24:50.841833 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-bw59f"] Nov 28 11:24:51 crc kubenswrapper[4772]: I1128 11:24:51.772730 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"108f92bc-b212-4233-ad3c-2ce3c1d0cc99","Type":"ContainerStarted","Data":"c6014e925e5f186bc616ef5eb944c18eb07c24e83e05d75ba6d4dae02e0dbf76"} Nov 28 
11:24:51 crc kubenswrapper[4772]: I1128 11:24:51.779481 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9585d4fc-mmppw" event={"ID":"9b59c268-e1dd-41c3-bfbb-945870d0df3c","Type":"ContainerStarted","Data":"4f754200d615008e9636953e0bbb084e0823de9b5c4cf8e5bc5a1f3a98a35a66"} Nov 28 11:24:51 crc kubenswrapper[4772]: I1128 11:24:51.783616 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7a9a8a44-19de-4233-acc9-a73d8aecea2a","Type":"ContainerStarted","Data":"41d5a534ab5c4e09435d2eb06698598da379e2e7794b3ea51681b1283181c9e0"} Nov 28 11:24:52 crc kubenswrapper[4772]: I1128 11:24:52.037068 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f47c334e-2b64-403b-a682-0cb12b2c60d3" path="/var/lib/kubelet/pods/f47c334e-2b64-403b-a682-0cb12b2c60d3/volumes" Nov 28 11:24:52 crc kubenswrapper[4772]: I1128 11:24:52.809669 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" event={"ID":"0283d234-d683-44b1-8e41-71fef0c61b16","Type":"ContainerStarted","Data":"4b113a5a5fe573e7e8954abf02cea36ea05635597c728e25f732bb8c84072813"} Nov 28 11:24:52 crc kubenswrapper[4772]: I1128 11:24:52.810220 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:52 crc kubenswrapper[4772]: I1128 11:24:52.816098 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7a9a8a44-19de-4233-acc9-a73d8aecea2a","Type":"ContainerStarted","Data":"2a292d30d2afce21e1f6e0c729e5ae0fd4a237039ebd188dad4cf6f0ce07ee96"} Nov 28 11:24:52 crc kubenswrapper[4772]: I1128 11:24:52.816270 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7a9a8a44-19de-4233-acc9-a73d8aecea2a" containerName="glance-log" containerID="cri-o://41d5a534ab5c4e09435d2eb06698598da379e2e7794b3ea51681b1283181c9e0" gracePeriod=30 Nov 28 11:24:52 crc kubenswrapper[4772]: I1128 11:24:52.816392 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7a9a8a44-19de-4233-acc9-a73d8aecea2a" containerName="glance-httpd" containerID="cri-o://2a292d30d2afce21e1f6e0c729e5ae0fd4a237039ebd188dad4cf6f0ce07ee96" gracePeriod=30 Nov 28 11:24:52 crc kubenswrapper[4772]: I1128 11:24:52.844995 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" podStartSLOduration=5.844974135 podStartE2EDuration="5.844974135s" podCreationTimestamp="2025-11-28 11:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:24:52.835007293 +0000 UTC m=+1091.158250530" watchObservedRunningTime="2025-11-28 11:24:52.844974135 +0000 UTC m=+1091.168217362" Nov 28 11:24:52 crc kubenswrapper[4772]: I1128 11:24:52.872201 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.872173795 podStartE2EDuration="5.872173795s" podCreationTimestamp="2025-11-28 11:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:24:52.860403695 +0000 UTC m=+1091.183646942" watchObservedRunningTime="2025-11-28 11:24:52.872173795 +0000 UTC m=+1091.195417022" Nov 28 11:24:53 
crc kubenswrapper[4772]: I1128 11:24:53.853255 4772 generic.go:334] "Generic (PLEG): container finished" podID="7a9a8a44-19de-4233-acc9-a73d8aecea2a" containerID="2a292d30d2afce21e1f6e0c729e5ae0fd4a237039ebd188dad4cf6f0ce07ee96" exitCode=0 Nov 28 11:24:53 crc kubenswrapper[4772]: I1128 11:24:53.854013 4772 generic.go:334] "Generic (PLEG): container finished" podID="7a9a8a44-19de-4233-acc9-a73d8aecea2a" containerID="41d5a534ab5c4e09435d2eb06698598da379e2e7794b3ea51681b1283181c9e0" exitCode=143 Nov 28 11:24:53 crc kubenswrapper[4772]: I1128 11:24:53.853334 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7a9a8a44-19de-4233-acc9-a73d8aecea2a","Type":"ContainerDied","Data":"2a292d30d2afce21e1f6e0c729e5ae0fd4a237039ebd188dad4cf6f0ce07ee96"} Nov 28 11:24:53 crc kubenswrapper[4772]: I1128 11:24:53.854629 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7a9a8a44-19de-4233-acc9-a73d8aecea2a","Type":"ContainerDied","Data":"41d5a534ab5c4e09435d2eb06698598da379e2e7794b3ea51681b1283181c9e0"} Nov 28 11:24:53 crc kubenswrapper[4772]: I1128 11:24:53.860733 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"108f92bc-b212-4233-ad3c-2ce3c1d0cc99","Type":"ContainerStarted","Data":"fa63497648ca972f02d07e0760a47f68c146e0f84c15e71193dd9f519bf34b10"} Nov 28 11:24:53 crc kubenswrapper[4772]: I1128 11:24:53.861010 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="108f92bc-b212-4233-ad3c-2ce3c1d0cc99" containerName="glance-log" containerID="cri-o://c6014e925e5f186bc616ef5eb944c18eb07c24e83e05d75ba6d4dae02e0dbf76" gracePeriod=30 Nov 28 11:24:53 crc kubenswrapper[4772]: I1128 11:24:53.861847 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="108f92bc-b212-4233-ad3c-2ce3c1d0cc99" containerName="glance-httpd" containerID="cri-o://fa63497648ca972f02d07e0760a47f68c146e0f84c15e71193dd9f519bf34b10" gracePeriod=30 Nov 28 11:24:53 crc kubenswrapper[4772]: I1128 11:24:53.885053 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.885011924 podStartE2EDuration="6.885011924s" podCreationTimestamp="2025-11-28 11:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:24:53.881092548 +0000 UTC m=+1092.204335805" watchObservedRunningTime="2025-11-28 11:24:53.885011924 +0000 UTC m=+1092.208255151" Nov 28 11:24:54 crc kubenswrapper[4772]: I1128 11:24:54.878545 4772 generic.go:334] "Generic (PLEG): container finished" podID="108f92bc-b212-4233-ad3c-2ce3c1d0cc99" containerID="fa63497648ca972f02d07e0760a47f68c146e0f84c15e71193dd9f519bf34b10" exitCode=0 Nov 28 11:24:54 crc kubenswrapper[4772]: I1128 11:24:54.878974 4772 generic.go:334] "Generic (PLEG): container finished" podID="108f92bc-b212-4233-ad3c-2ce3c1d0cc99" containerID="c6014e925e5f186bc616ef5eb944c18eb07c24e83e05d75ba6d4dae02e0dbf76" exitCode=143 Nov 28 11:24:54 crc kubenswrapper[4772]: I1128 11:24:54.878619 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"108f92bc-b212-4233-ad3c-2ce3c1d0cc99","Type":"ContainerDied","Data":"fa63497648ca972f02d07e0760a47f68c146e0f84c15e71193dd9f519bf34b10"} Nov 28 11:24:54 crc kubenswrapper[4772]: I1128 11:24:54.879022 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"108f92bc-b212-4233-ad3c-2ce3c1d0cc99","Type":"ContainerDied","Data":"c6014e925e5f186bc616ef5eb944c18eb07c24e83e05d75ba6d4dae02e0dbf76"} Nov 28 11:24:55 crc kubenswrapper[4772]: I1128 11:24:55.912878 4772 generic.go:334] "Generic (PLEG): container finished" podID="a89f9035-6c39-41ce-b7e4-31df46e662ce" containerID="76ffd2b9c18295f3f278a4a8d1b93bbca59982d3fdf63509e8d4bde76968d8c0" exitCode=0 Nov 28 11:24:55 crc kubenswrapper[4772]: I1128 11:24:55.913127 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7xjd9" event={"ID":"a89f9035-6c39-41ce-b7e4-31df46e662ce","Type":"ContainerDied","Data":"76ffd2b9c18295f3f278a4a8d1b93bbca59982d3fdf63509e8d4bde76968d8c0"} Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.109431 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6fb59ccf89-j6w9l"] Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.157288 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68fd445458-gvkkx"] Nov 28 11:24:56 crc kubenswrapper[4772]: E1128 11:24:56.158785 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47c334e-2b64-403b-a682-0cb12b2c60d3" containerName="init" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.158878 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47c334e-2b64-403b-a682-0cb12b2c60d3" containerName="init" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.159115 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f47c334e-2b64-403b-a682-0cb12b2c60d3" containerName="init" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.160187 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.165797 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.210512 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68fd445458-gvkkx"] Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.246621 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f9585d4fc-mmppw"] Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.285196 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eabec09-9340-4cd4-a7db-ec957878a3a0-logs\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.285564 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-combined-ca-bundle\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.285769 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-horizon-secret-key\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.285859 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7eabec09-9340-4cd4-a7db-ec957878a3a0-config-data\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.286053 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-horizon-tls-certs\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.286148 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eabec09-9340-4cd4-a7db-ec957878a3a0-scripts\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.286245 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvkkp\" (UniqueName: \"kubernetes.io/projected/7eabec09-9340-4cd4-a7db-ec957878a3a0-kube-api-access-qvkkp\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.297186 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f99664784-xpqjq"] Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.299083 
4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.329394 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f99664784-xpqjq"] Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.388632 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eabec09-9340-4cd4-a7db-ec957878a3a0-scripts\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.388704 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvkkp\" (UniqueName: \"kubernetes.io/projected/7eabec09-9340-4cd4-a7db-ec957878a3a0-kube-api-access-qvkkp\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.388737 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a9ada7a-c788-41ad-87a6-431ba8c94394-scripts\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.389277 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3a9ada7a-c788-41ad-87a6-431ba8c94394-horizon-secret-key\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.389386 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eabec09-9340-4cd4-a7db-ec957878a3a0-logs\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.389417 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-horizon-secret-key\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.389433 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-combined-ca-bundle\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.389459 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7eabec09-9340-4cd4-a7db-ec957878a3a0-config-data\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.389483 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3a9ada7a-c788-41ad-87a6-431ba8c94394-combined-ca-bundle\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.389519 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnnm6\" (UniqueName: \"kubernetes.io/projected/3a9ada7a-c788-41ad-87a6-431ba8c94394-kube-api-access-dnnm6\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.389544 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3a9ada7a-c788-41ad-87a6-431ba8c94394-config-data\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.389570 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a9ada7a-c788-41ad-87a6-431ba8c94394-logs\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.389592 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a9ada7a-c788-41ad-87a6-431ba8c94394-horizon-tls-certs\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.389619 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-horizon-tls-certs\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.389619 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eabec09-9340-4cd4-a7db-ec957878a3a0-scripts\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.390018 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eabec09-9340-4cd4-a7db-ec957878a3a0-logs\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.392308 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7eabec09-9340-4cd4-a7db-ec957878a3a0-config-data\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.398112 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-horizon-tls-certs\") pod \"horizon-68fd445458-gvkkx\" (UID: 
\"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.398570 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-horizon-secret-key\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.403500 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-combined-ca-bundle\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.413063 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvkkp\" (UniqueName: \"kubernetes.io/projected/7eabec09-9340-4cd4-a7db-ec957878a3a0-kube-api-access-qvkkp\") pod \"horizon-68fd445458-gvkkx\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.486024 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.491263 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3a9ada7a-c788-41ad-87a6-431ba8c94394-config-data\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.491333 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a9ada7a-c788-41ad-87a6-431ba8c94394-logs\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.491386 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a9ada7a-c788-41ad-87a6-431ba8c94394-horizon-tls-certs\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.491471 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a9ada7a-c788-41ad-87a6-431ba8c94394-scripts\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.491507 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3a9ada7a-c788-41ad-87a6-431ba8c94394-horizon-secret-key\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.491614 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a9ada7a-c788-41ad-87a6-431ba8c94394-combined-ca-bundle\") pod 
\"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.491667 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnnm6\" (UniqueName: \"kubernetes.io/projected/3a9ada7a-c788-41ad-87a6-431ba8c94394-kube-api-access-dnnm6\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.492518 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a9ada7a-c788-41ad-87a6-431ba8c94394-logs\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.492973 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a9ada7a-c788-41ad-87a6-431ba8c94394-scripts\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.493079 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3a9ada7a-c788-41ad-87a6-431ba8c94394-config-data\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.496464 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3a9ada7a-c788-41ad-87a6-431ba8c94394-horizon-secret-key\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.496532 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a9ada7a-c788-41ad-87a6-431ba8c94394-horizon-tls-certs\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.496966 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a9ada7a-c788-41ad-87a6-431ba8c94394-combined-ca-bundle\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.510252 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnnm6\" (UniqueName: \"kubernetes.io/projected/3a9ada7a-c788-41ad-87a6-431ba8c94394-kube-api-access-dnnm6\") pod \"horizon-6f99664784-xpqjq\" (UID: \"3a9ada7a-c788-41ad-87a6-431ba8c94394\") " pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:56 crc kubenswrapper[4772]: I1128 11:24:56.619688 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:24:58 crc kubenswrapper[4772]: I1128 11:24:58.271988 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:24:58 crc kubenswrapper[4772]: I1128 11:24:58.344556 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-2wtm9"] Nov 28 11:24:58 crc kubenswrapper[4772]: I1128 11:24:58.344885 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" podUID="ee784ed1-af35-4cee-93fc-23d12c813dc6" containerName="dnsmasq-dns" containerID="cri-o://dbf958614ee83e249e47ffe11e9a2630f93158f35362cf381c4cbf54cd4f9ca9" gracePeriod=10 Nov 28 11:24:59 crc kubenswrapper[4772]: I1128 11:24:59.967458 4772 generic.go:334] "Generic (PLEG): container finished" podID="ee784ed1-af35-4cee-93fc-23d12c813dc6" containerID="dbf958614ee83e249e47ffe11e9a2630f93158f35362cf381c4cbf54cd4f9ca9" exitCode=0 Nov 28 11:24:59 crc kubenswrapper[4772]: I1128 11:24:59.967523 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" event={"ID":"ee784ed1-af35-4cee-93fc-23d12c813dc6","Type":"ContainerDied","Data":"dbf958614ee83e249e47ffe11e9a2630f93158f35362cf381c4cbf54cd4f9ca9"} Nov 28 11:25:00 crc kubenswrapper[4772]: I1128 11:25:00.686753 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" podUID="ee784ed1-af35-4cee-93fc-23d12c813dc6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Nov 28 11:25:04 crc kubenswrapper[4772]: E1128 11:25:04.222930 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Nov 28 11:25:04 crc kubenswrapper[4772]: E1128 11:25:04.223713 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jw2gr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-krcf2_openstack(3d1f86f7-529a-4ed1-885d-1beb4c14b213): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:25:04 crc kubenswrapper[4772]: E1128 11:25:04.226536 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-krcf2" podUID="3d1f86f7-529a-4ed1-885d-1beb4c14b213" Nov 28 11:25:05 crc kubenswrapper[4772]: E1128 11:25:05.021719 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-krcf2" podUID="3d1f86f7-529a-4ed1-885d-1beb4c14b213" Nov 28 11:25:05 crc kubenswrapper[4772]: I1128 11:25:05.687077 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" podUID="ee784ed1-af35-4cee-93fc-23d12c813dc6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Nov 28 11:25:06 crc kubenswrapper[4772]: E1128 11:25:06.584658 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 28 11:25:06 crc kubenswrapper[4772]: E1128 
11:25:06.584885 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n567hbbhf5hc4h645h5f7hb7h9bhb5h89h659h5b4h547h5c8hf9h5cch686hd7h5chdh5f7hb6h5fbh69h77h54ch99h658h8chfch77h68q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mgw9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a8f47327-6cc4-4ca2-9363-0eca9d129686): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:25:06 crc kubenswrapper[4772]: E1128 11:25:06.595647 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 28 11:25:06 crc kubenswrapper[4772]: E1128 11:25:06.595835 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h65fh5d8hcch59h56h9dh8fh54fh696h5d4h5b8hf5h648hf5h596h588h67fh556h688h58h5f5h547h7bh5b7h548h655h59bh55dh5b8h5bch5fbq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lltxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6fb59ccf89-j6w9l_openstack(c946f8ec-d68e-4217-9076-b5746c9c8439): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:25:06 crc kubenswrapper[4772]: E1128 11:25:06.600012 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6fb59ccf89-j6w9l" podUID="c946f8ec-d68e-4217-9076-b5746c9c8439" Nov 28 11:25:06 crc kubenswrapper[4772]: E1128 11:25:06.614810 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 28 11:25:06 crc kubenswrapper[4772]: E1128 11:25:06.615064 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n568hd4h6ch696h64fh548h5c5hcfh54bh85h59ch666h55fh74h567hch9ch5fdh4h6bh5dh59bh699hc9hc9h586h674h586h5cch679h9ch58q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kzws7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-f9585d4fc-mmppw_openstack(9b59c268-e1dd-41c3-bfbb-945870d0df3c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:25:06 crc kubenswrapper[4772]: E1128 11:25:06.618322 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-f9585d4fc-mmppw" podUID="9b59c268-e1dd-41c3-bfbb-945870d0df3c" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.683846 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.712834 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.720996 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.747426 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-scripts\") pod \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.747552 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49kfv\" (UniqueName: \"kubernetes.io/projected/7a9a8a44-19de-4233-acc9-a73d8aecea2a-kube-api-access-49kfv\") pod \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.747731 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9a8a44-19de-4233-acc9-a73d8aecea2a-logs\") pod \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.747782 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-internal-tls-certs\") pod \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.747826 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-config-data\") pod \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.747963 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-combined-ca-bundle\") pod \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.748059 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a9a8a44-19de-4233-acc9-a73d8aecea2a-httpd-run\") pod \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.748183 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\" (UID: \"7a9a8a44-19de-4233-acc9-a73d8aecea2a\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.752938 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a9a8a44-19de-4233-acc9-a73d8aecea2a-logs" (OuterVolumeSpecName: "logs") pod "7a9a8a44-19de-4233-acc9-a73d8aecea2a" (UID: "7a9a8a44-19de-4233-acc9-a73d8aecea2a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.753396 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a9a8a44-19de-4233-acc9-a73d8aecea2a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7a9a8a44-19de-4233-acc9-a73d8aecea2a" (UID: "7a9a8a44-19de-4233-acc9-a73d8aecea2a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.763574 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "7a9a8a44-19de-4233-acc9-a73d8aecea2a" (UID: "7a9a8a44-19de-4233-acc9-a73d8aecea2a"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.798377 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-scripts" (OuterVolumeSpecName: "scripts") pod "7a9a8a44-19de-4233-acc9-a73d8aecea2a" (UID: "7a9a8a44-19de-4233-acc9-a73d8aecea2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.799031 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9a8a44-19de-4233-acc9-a73d8aecea2a-kube-api-access-49kfv" (OuterVolumeSpecName: "kube-api-access-49kfv") pod "7a9a8a44-19de-4233-acc9-a73d8aecea2a" (UID: "7a9a8a44-19de-4233-acc9-a73d8aecea2a"). InnerVolumeSpecName "kube-api-access-49kfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.818032 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a9a8a44-19de-4233-acc9-a73d8aecea2a" (UID: "7a9a8a44-19de-4233-acc9-a73d8aecea2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.838436 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-config-data" (OuterVolumeSpecName: "config-data") pod "7a9a8a44-19de-4233-acc9-a73d8aecea2a" (UID: "7a9a8a44-19de-4233-acc9-a73d8aecea2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.843184 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7a9a8a44-19de-4233-acc9-a73d8aecea2a" (UID: "7a9a8a44-19de-4233-acc9-a73d8aecea2a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.853689 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.853780 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-fernet-keys\") pod \"a89f9035-6c39-41ce-b7e4-31df46e662ce\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.853837 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-combined-ca-bundle\") pod \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.853883 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq6qp\" (UniqueName: \"kubernetes.io/projected/a89f9035-6c39-41ce-b7e4-31df46e662ce-kube-api-access-sq6qp\") pod \"a89f9035-6c39-41ce-b7e4-31df46e662ce\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.853918 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-logs\") pod \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.853938 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-public-tls-certs\") pod \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.854006 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-scripts\") pod \"a89f9035-6c39-41ce-b7e4-31df46e662ce\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.854048 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-httpd-run\") pod \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.854078 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-credential-keys\") pod \"a89f9035-6c39-41ce-b7e4-31df46e662ce\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.854110 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-config-data\") pod \"a89f9035-6c39-41ce-b7e4-31df46e662ce\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " Nov 28 11:25:06 crc kubenswrapper[4772]: 
I1128 11:25:06.854160 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-config-data\") pod \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.854187 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czs2h\" (UniqueName: \"kubernetes.io/projected/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-kube-api-access-czs2h\") pod \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.854210 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-scripts\") pod \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\" (UID: \"108f92bc-b212-4233-ad3c-2ce3c1d0cc99\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.854256 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-combined-ca-bundle\") pod \"a89f9035-6c39-41ce-b7e4-31df46e662ce\" (UID: \"a89f9035-6c39-41ce-b7e4-31df46e662ce\") " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.854953 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9a8a44-19de-4233-acc9-a73d8aecea2a-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.854970 4772 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.854981 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.854992 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.855001 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a9a8a44-19de-4233-acc9-a73d8aecea2a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.855034 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.855043 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9a8a44-19de-4233-acc9-a73d8aecea2a-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.855057 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49kfv\" (UniqueName: \"kubernetes.io/projected/7a9a8a44-19de-4233-acc9-a73d8aecea2a-kube-api-access-49kfv\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.856113 4772 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-logs" (OuterVolumeSpecName: "logs") pod "108f92bc-b212-4233-ad3c-2ce3c1d0cc99" (UID: "108f92bc-b212-4233-ad3c-2ce3c1d0cc99"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.857646 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "108f92bc-b212-4233-ad3c-2ce3c1d0cc99" (UID: "108f92bc-b212-4233-ad3c-2ce3c1d0cc99"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.858175 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "108f92bc-b212-4233-ad3c-2ce3c1d0cc99" (UID: "108f92bc-b212-4233-ad3c-2ce3c1d0cc99"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.861245 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-scripts" (OuterVolumeSpecName: "scripts") pod "108f92bc-b212-4233-ad3c-2ce3c1d0cc99" (UID: "108f92bc-b212-4233-ad3c-2ce3c1d0cc99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.865555 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-scripts" (OuterVolumeSpecName: "scripts") pod "a89f9035-6c39-41ce-b7e4-31df46e662ce" (UID: "a89f9035-6c39-41ce-b7e4-31df46e662ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.865542 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a89f9035-6c39-41ce-b7e4-31df46e662ce" (UID: "a89f9035-6c39-41ce-b7e4-31df46e662ce"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.866801 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-kube-api-access-czs2h" (OuterVolumeSpecName: "kube-api-access-czs2h") pod "108f92bc-b212-4233-ad3c-2ce3c1d0cc99" (UID: "108f92bc-b212-4233-ad3c-2ce3c1d0cc99"). InnerVolumeSpecName "kube-api-access-czs2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.878614 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a89f9035-6c39-41ce-b7e4-31df46e662ce-kube-api-access-sq6qp" (OuterVolumeSpecName: "kube-api-access-sq6qp") pod "a89f9035-6c39-41ce-b7e4-31df46e662ce" (UID: "a89f9035-6c39-41ce-b7e4-31df46e662ce"). InnerVolumeSpecName "kube-api-access-sq6qp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.918001 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.920566 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a89f9035-6c39-41ce-b7e4-31df46e662ce" (UID: "a89f9035-6c39-41ce-b7e4-31df46e662ce"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.928290 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "108f92bc-b212-4233-ad3c-2ce3c1d0cc99" (UID: "108f92bc-b212-4233-ad3c-2ce3c1d0cc99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.929435 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-config-data" (OuterVolumeSpecName: "config-data") pod "a89f9035-6c39-41ce-b7e4-31df46e662ce" (UID: "a89f9035-6c39-41ce-b7e4-31df46e662ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.944234 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a89f9035-6c39-41ce-b7e4-31df46e662ce" (UID: "a89f9035-6c39-41ce-b7e4-31df46e662ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.951020 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "108f92bc-b212-4233-ad3c-2ce3c1d0cc99" (UID: "108f92bc-b212-4233-ad3c-2ce3c1d0cc99"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.953512 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-config-data" (OuterVolumeSpecName: "config-data") pod "108f92bc-b212-4233-ad3c-2ce3c1d0cc99" (UID: "108f92bc-b212-4233-ad3c-2ce3c1d0cc99"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.956942 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.956993 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.957041 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.957053 4772 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.957064 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.957074 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq6qp\" (UniqueName: \"kubernetes.io/projected/a89f9035-6c39-41ce-b7e4-31df46e662ce-kube-api-access-sq6qp\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.957089 4772 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.957098 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.957106 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.957118 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.957127 4772 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.957139 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a89f9035-6c39-41ce-b7e4-31df46e662ce-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.957148 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.957157 4772 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-czs2h\" (UniqueName: \"kubernetes.io/projected/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-kube-api-access-czs2h\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.957167 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108f92bc-b212-4233-ad3c-2ce3c1d0cc99-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:06 crc kubenswrapper[4772]: I1128 11:25:06.979845 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.042576 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7a9a8a44-19de-4233-acc9-a73d8aecea2a","Type":"ContainerDied","Data":"8ec042bf9195f73d5af2e76e6258960c21f8a8e2d8a28384e0e9d6bf6df552c5"} Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.042641 4772 scope.go:117] "RemoveContainer" containerID="2a292d30d2afce21e1f6e0c729e5ae0fd4a237039ebd188dad4cf6f0ce07ee96" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.044032 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.052189 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"108f92bc-b212-4233-ad3c-2ce3c1d0cc99","Type":"ContainerDied","Data":"1cf741f78fb72cb1fef124742845a0be120d6a41820bf49bd67d2f17bcdb3918"} Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.052199 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.055811 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7xjd9" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.057296 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7xjd9" event={"ID":"a89f9035-6c39-41ce-b7e4-31df46e662ce","Type":"ContainerDied","Data":"d5f3945f503b971fd8ae0ca145854ade3143c93bc7bda34205b5006563325a2d"} Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.057328 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5f3945f503b971fd8ae0ca145854ade3143c93bc7bda34205b5006563325a2d" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.058733 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.190092 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.218944 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.228398 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.243824 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.257086 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:25:07 crc kubenswrapper[4772]: E1128 11:25:07.257813 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108f92bc-b212-4233-ad3c-2ce3c1d0cc99" containerName="glance-httpd" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.257838 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="108f92bc-b212-4233-ad3c-2ce3c1d0cc99" containerName="glance-httpd" Nov 28 11:25:07 crc kubenswrapper[4772]: E1128 11:25:07.257852 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9a8a44-19de-4233-acc9-a73d8aecea2a" containerName="glance-log" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.257860 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9a8a44-19de-4233-acc9-a73d8aecea2a" containerName="glance-log" Nov 28 11:25:07 crc kubenswrapper[4772]: E1128 11:25:07.257883 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9a8a44-19de-4233-acc9-a73d8aecea2a" containerName="glance-httpd" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.257889 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9a8a44-19de-4233-acc9-a73d8aecea2a" containerName="glance-httpd" Nov 28 11:25:07 crc kubenswrapper[4772]: E1128 11:25:07.257906 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a89f9035-6c39-41ce-b7e4-31df46e662ce" containerName="keystone-bootstrap" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.257913 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a89f9035-6c39-41ce-b7e4-31df46e662ce" containerName="keystone-bootstrap" Nov 28 11:25:07 crc kubenswrapper[4772]: E1128 11:25:07.257923 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108f92bc-b212-4233-ad3c-2ce3c1d0cc99" containerName="glance-log" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.257929 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="108f92bc-b212-4233-ad3c-2ce3c1d0cc99" containerName="glance-log" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.258135 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="108f92bc-b212-4233-ad3c-2ce3c1d0cc99" containerName="glance-httpd" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.258147 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="108f92bc-b212-4233-ad3c-2ce3c1d0cc99" containerName="glance-log" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.258163 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9a8a44-19de-4233-acc9-a73d8aecea2a" containerName="glance-log" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.258179 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9a8a44-19de-4233-acc9-a73d8aecea2a" containerName="glance-httpd" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.258186 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a89f9035-6c39-41ce-b7e4-31df46e662ce" containerName="keystone-bootstrap" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.259577 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.263045 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vv5mt" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.263408 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.263556 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.265190 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.292793 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.303023 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.308183 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.311838 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.317691 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.333105 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.363252 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.363313 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7822\" (UniqueName: \"kubernetes.io/projected/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-kube-api-access-m7822\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.363374 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-logs\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.363420 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-scripts\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.363454 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.363487 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-config-data\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.363510 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " 
pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.363536 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.465437 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.465512 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.465548 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.465603 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7822\" (UniqueName: \"kubernetes.io/projected/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-kube-api-access-m7822\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.465662 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-logs\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.465895 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-scripts\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.465976 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.466002 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.466069 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h52rs\" (UniqueName: \"kubernetes.io/projected/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-kube-api-access-h52rs\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.466105 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.466214 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-logs\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.466221 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-config-data\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.466289 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.466316 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.466339 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.466386 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.466443 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-logs\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.466467 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.468138 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.478214 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-scripts\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.478401 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.478734 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.479766 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-config-data\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.490718 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7822\" (UniqueName: \"kubernetes.io/projected/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-kube-api-access-m7822\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.511423 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.569670 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.569803 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.569830 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h52rs\" (UniqueName: \"kubernetes.io/projected/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-kube-api-access-h52rs\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.569884 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.569904 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.569922 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.569947 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-logs\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.569986 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.572854 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.573280 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.573517 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.573850 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-logs\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.575967 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.576999 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.577209 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.595176 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h52rs\" (UniqueName: \"kubernetes.io/projected/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-kube-api-access-h52rs\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.612489 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.664612 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.858396 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7xjd9"] Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.868054 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7xjd9"] Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.908161 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.949318 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-76nzr"] Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.953895 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.956386 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.957931 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.958148 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.958334 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4j4h4" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.958499 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 11:25:07 crc kubenswrapper[4772]: I1128 11:25:07.960842 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-76nzr"] Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.023225 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="108f92bc-b212-4233-ad3c-2ce3c1d0cc99" path="/var/lib/kubelet/pods/108f92bc-b212-4233-ad3c-2ce3c1d0cc99/volumes" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.024315 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9a8a44-19de-4233-acc9-a73d8aecea2a" path="/var/lib/kubelet/pods/7a9a8a44-19de-4233-acc9-a73d8aecea2a/volumes" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.024978 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a89f9035-6c39-41ce-b7e4-31df46e662ce" path="/var/lib/kubelet/pods/a89f9035-6c39-41ce-b7e4-31df46e662ce/volumes" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.103537 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-fernet-keys\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.103638 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-scripts\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.103678 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-config-data\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.103858 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-credential-keys\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.103924 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-combined-ca-bundle\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.104140 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62pk6\" (UniqueName: \"kubernetes.io/projected/e46680be-1091-4b51-a858-2365af24a086-kube-api-access-62pk6\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.206217 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-credential-keys\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.209328 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-combined-ca-bundle\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.209518 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62pk6\" (UniqueName: \"kubernetes.io/projected/e46680be-1091-4b51-a858-2365af24a086-kube-api-access-62pk6\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.209578 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-fernet-keys\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.209689 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-scripts\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.209736 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-config-data\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.211692 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-credential-keys\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.215412 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-config-data\") pod \"keystone-bootstrap-76nzr\" (UID: 
\"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.215493 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-scripts\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.215437 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-combined-ca-bundle\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.219856 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-fernet-keys\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.235772 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62pk6\" (UniqueName: \"kubernetes.io/projected/e46680be-1091-4b51-a858-2365af24a086-kube-api-access-62pk6\") pod \"keystone-bootstrap-76nzr\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") " pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:08 crc kubenswrapper[4772]: I1128 11:25:08.331634 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-76nzr" Nov 28 11:25:14 crc kubenswrapper[4772]: I1128 11:25:14.906904 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.074528 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c946f8ec-d68e-4217-9076-b5746c9c8439-logs\") pod \"c946f8ec-d68e-4217-9076-b5746c9c8439\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.074937 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c946f8ec-d68e-4217-9076-b5746c9c8439-config-data\") pod \"c946f8ec-d68e-4217-9076-b5746c9c8439\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.074981 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c946f8ec-d68e-4217-9076-b5746c9c8439-logs" (OuterVolumeSpecName: "logs") pod "c946f8ec-d68e-4217-9076-b5746c9c8439" (UID: "c946f8ec-d68e-4217-9076-b5746c9c8439"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.075016 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c946f8ec-d68e-4217-9076-b5746c9c8439-scripts\") pod \"c946f8ec-d68e-4217-9076-b5746c9c8439\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.075095 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c946f8ec-d68e-4217-9076-b5746c9c8439-horizon-secret-key\") pod \"c946f8ec-d68e-4217-9076-b5746c9c8439\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.075141 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lltxz\" (UniqueName: \"kubernetes.io/projected/c946f8ec-d68e-4217-9076-b5746c9c8439-kube-api-access-lltxz\") pod \"c946f8ec-d68e-4217-9076-b5746c9c8439\" (UID: \"c946f8ec-d68e-4217-9076-b5746c9c8439\") " Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.075629 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c946f8ec-d68e-4217-9076-b5746c9c8439-config-data" (OuterVolumeSpecName: "config-data") pod "c946f8ec-d68e-4217-9076-b5746c9c8439" (UID: "c946f8ec-d68e-4217-9076-b5746c9c8439"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.076570 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c946f8ec-d68e-4217-9076-b5746c9c8439-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.076613 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c946f8ec-d68e-4217-9076-b5746c9c8439-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.077646 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c946f8ec-d68e-4217-9076-b5746c9c8439-scripts" (OuterVolumeSpecName: "scripts") pod "c946f8ec-d68e-4217-9076-b5746c9c8439" (UID: "c946f8ec-d68e-4217-9076-b5746c9c8439"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.083670 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c946f8ec-d68e-4217-9076-b5746c9c8439-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c946f8ec-d68e-4217-9076-b5746c9c8439" (UID: "c946f8ec-d68e-4217-9076-b5746c9c8439"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.084712 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c946f8ec-d68e-4217-9076-b5746c9c8439-kube-api-access-lltxz" (OuterVolumeSpecName: "kube-api-access-lltxz") pod "c946f8ec-d68e-4217-9076-b5746c9c8439" (UID: "c946f8ec-d68e-4217-9076-b5746c9c8439"). InnerVolumeSpecName "kube-api-access-lltxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.141132 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fb59ccf89-j6w9l" event={"ID":"c946f8ec-d68e-4217-9076-b5746c9c8439","Type":"ContainerDied","Data":"4d7307ae4b2ca1cbec646da925bbee9d29ccc9c798185593266be207831feafc"} Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.141210 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fb59ccf89-j6w9l" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.186843 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c946f8ec-d68e-4217-9076-b5746c9c8439-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.186882 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c946f8ec-d68e-4217-9076-b5746c9c8439-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.186893 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lltxz\" (UniqueName: \"kubernetes.io/projected/c946f8ec-d68e-4217-9076-b5746c9c8439-kube-api-access-lltxz\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.208956 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6fb59ccf89-j6w9l"] Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.219912 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6fb59ccf89-j6w9l"] Nov 28 11:25:15 crc kubenswrapper[4772]: E1128 11:25:15.634282 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 28 11:25:15 crc kubenswrapper[4772]: E1128 11:25:15.634553 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shjcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-j9ck9_openstack(59e2ef4f-2f84-4b24-9a30-27fea471faf5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:25:15 crc kubenswrapper[4772]: E1128 11:25:15.635808 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-j9ck9" podUID="59e2ef4f-2f84-4b24-9a30-27fea471faf5" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.686918 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" podUID="ee784ed1-af35-4cee-93fc-23d12c813dc6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: i/o timeout" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.687575 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.738983 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.798985 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b59c268-e1dd-41c3-bfbb-945870d0df3c-horizon-secret-key\") pod \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.799039 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b59c268-e1dd-41c3-bfbb-945870d0df3c-config-data\") pod \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.799072 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzws7\" (UniqueName: \"kubernetes.io/projected/9b59c268-e1dd-41c3-bfbb-945870d0df3c-kube-api-access-kzws7\") pod \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.799820 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b59c268-e1dd-41c3-bfbb-945870d0df3c-config-data" (OuterVolumeSpecName: "config-data") pod "9b59c268-e1dd-41c3-bfbb-945870d0df3c" (UID: "9b59c268-e1dd-41c3-bfbb-945870d0df3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.805005 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b59c268-e1dd-41c3-bfbb-945870d0df3c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9b59c268-e1dd-41c3-bfbb-945870d0df3c" (UID: "9b59c268-e1dd-41c3-bfbb-945870d0df3c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.806554 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b59c268-e1dd-41c3-bfbb-945870d0df3c-kube-api-access-kzws7" (OuterVolumeSpecName: "kube-api-access-kzws7") pod "9b59c268-e1dd-41c3-bfbb-945870d0df3c" (UID: "9b59c268-e1dd-41c3-bfbb-945870d0df3c"). InnerVolumeSpecName "kube-api-access-kzws7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.900652 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b59c268-e1dd-41c3-bfbb-945870d0df3c-scripts\") pod \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.901021 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b59c268-e1dd-41c3-bfbb-945870d0df3c-logs\") pod \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\" (UID: \"9b59c268-e1dd-41c3-bfbb-945870d0df3c\") " Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.901434 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b59c268-e1dd-41c3-bfbb-945870d0df3c-logs" (OuterVolumeSpecName: "logs") pod "9b59c268-e1dd-41c3-bfbb-945870d0df3c" (UID: "9b59c268-e1dd-41c3-bfbb-945870d0df3c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.901653 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b59c268-e1dd-41c3-bfbb-945870d0df3c-scripts" (OuterVolumeSpecName: "scripts") pod "9b59c268-e1dd-41c3-bfbb-945870d0df3c" (UID: "9b59c268-e1dd-41c3-bfbb-945870d0df3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.901771 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b59c268-e1dd-41c3-bfbb-945870d0df3c-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.901791 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b59c268-e1dd-41c3-bfbb-945870d0df3c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.901805 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b59c268-e1dd-41c3-bfbb-945870d0df3c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:15 crc kubenswrapper[4772]: I1128 11:25:15.901818 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzws7\" (UniqueName: \"kubernetes.io/projected/9b59c268-e1dd-41c3-bfbb-945870d0df3c-kube-api-access-kzws7\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:16 crc kubenswrapper[4772]: I1128 11:25:16.003740 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b59c268-e1dd-41c3-bfbb-945870d0df3c-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:16 crc kubenswrapper[4772]: I1128 11:25:16.014489 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c946f8ec-d68e-4217-9076-b5746c9c8439" path="/var/lib/kubelet/pods/c946f8ec-d68e-4217-9076-b5746c9c8439/volumes" Nov 28 11:25:16 crc kubenswrapper[4772]: I1128 11:25:16.158487 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9585d4fc-mmppw" event={"ID":"9b59c268-e1dd-41c3-bfbb-945870d0df3c","Type":"ContainerDied","Data":"4f754200d615008e9636953e0bbb084e0823de9b5c4cf8e5bc5a1f3a98a35a66"} Nov 28 11:25:16 crc kubenswrapper[4772]: I1128 11:25:16.158581 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f9585d4fc-mmppw" Nov 28 11:25:16 crc kubenswrapper[4772]: E1128 11:25:16.161223 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-j9ck9" podUID="59e2ef4f-2f84-4b24-9a30-27fea471faf5" Nov 28 11:25:16 crc kubenswrapper[4772]: I1128 11:25:16.241561 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f9585d4fc-mmppw"] Nov 28 11:25:16 crc kubenswrapper[4772]: I1128 11:25:16.255042 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f9585d4fc-mmppw"] Nov 28 11:25:16 crc kubenswrapper[4772]: I1128 11:25:16.969536 4772 scope.go:117] "RemoveContainer" containerID="41d5a534ab5c4e09435d2eb06698598da379e2e7794b3ea51681b1283181c9e0" Nov 28 11:25:16 crc kubenswrapper[4772]: E1128 11:25:16.969886 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 28 11:25:16 crc kubenswrapper[4772]: E1128 11:25:16.970281 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwclr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,
TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-72xt2_openstack(2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:25:16 crc kubenswrapper[4772]: E1128 11:25:16.971668 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-72xt2" podUID="2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.052276 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.185125 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" event={"ID":"ee784ed1-af35-4cee-93fc-23d12c813dc6","Type":"ContainerDied","Data":"012037eb94f0d37110f3bd7cedff722654a14d7f054c5e0f3da11052b9514c27"} Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.185236 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" Nov 28 11:25:17 crc kubenswrapper[4772]: E1128 11:25:17.198625 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-72xt2" podUID="2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.231874 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-dns-swift-storage-0\") pod \"ee784ed1-af35-4cee-93fc-23d12c813dc6\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.231968 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-ovsdbserver-sb\") pod \"ee784ed1-af35-4cee-93fc-23d12c813dc6\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.231998 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-config\") pod \"ee784ed1-af35-4cee-93fc-23d12c813dc6\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.232016 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qfl8\" (UniqueName: \"kubernetes.io/projected/ee784ed1-af35-4cee-93fc-23d12c813dc6-kube-api-access-7qfl8\") pod \"ee784ed1-af35-4cee-93fc-23d12c813dc6\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.232041 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-dns-svc\") pod \"ee784ed1-af35-4cee-93fc-23d12c813dc6\" (UID: 
\"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.232212 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-ovsdbserver-nb\") pod \"ee784ed1-af35-4cee-93fc-23d12c813dc6\" (UID: \"ee784ed1-af35-4cee-93fc-23d12c813dc6\") " Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.260644 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee784ed1-af35-4cee-93fc-23d12c813dc6-kube-api-access-7qfl8" (OuterVolumeSpecName: "kube-api-access-7qfl8") pod "ee784ed1-af35-4cee-93fc-23d12c813dc6" (UID: "ee784ed1-af35-4cee-93fc-23d12c813dc6"). InnerVolumeSpecName "kube-api-access-7qfl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.290778 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ee784ed1-af35-4cee-93fc-23d12c813dc6" (UID: "ee784ed1-af35-4cee-93fc-23d12c813dc6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.307163 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-config" (OuterVolumeSpecName: "config") pod "ee784ed1-af35-4cee-93fc-23d12c813dc6" (UID: "ee784ed1-af35-4cee-93fc-23d12c813dc6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.310842 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee784ed1-af35-4cee-93fc-23d12c813dc6" (UID: "ee784ed1-af35-4cee-93fc-23d12c813dc6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.312464 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ee784ed1-af35-4cee-93fc-23d12c813dc6" (UID: "ee784ed1-af35-4cee-93fc-23d12c813dc6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.322570 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ee784ed1-af35-4cee-93fc-23d12c813dc6" (UID: "ee784ed1-af35-4cee-93fc-23d12c813dc6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.334241 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.334283 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.334294 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.334305 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qfl8\" (UniqueName: \"kubernetes.io/projected/ee784ed1-af35-4cee-93fc-23d12c813dc6-kube-api-access-7qfl8\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.334318 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.334328 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee784ed1-af35-4cee-93fc-23d12c813dc6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.468347 4772 scope.go:117] "RemoveContainer" containerID="fa63497648ca972f02d07e0760a47f68c146e0f84c15e71193dd9f519bf34b10" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.531718 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f99664784-xpqjq"] Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.543104 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-2wtm9"] Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.548644 4772 scope.go:117] "RemoveContainer" containerID="c6014e925e5f186bc616ef5eb944c18eb07c24e83e05d75ba6d4dae02e0dbf76" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.555346 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-2wtm9"] Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.610730 4772 scope.go:117] "RemoveContainer" containerID="dbf958614ee83e249e47ffe11e9a2630f93158f35362cf381c4cbf54cd4f9ca9" Nov 28 11:25:17 crc kubenswrapper[4772]: W1128 11:25:17.615938 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a9ada7a_c788_41ad_87a6_431ba8c94394.slice/crio-5eba631f40b3a52b3f23892e8040b72fbe80e4ebe0def0e7c363760392a7dfb5 WatchSource:0}: Error finding container 5eba631f40b3a52b3f23892e8040b72fbe80e4ebe0def0e7c363760392a7dfb5: Status 404 returned error can't find the container with id 5eba631f40b3a52b3f23892e8040b72fbe80e4ebe0def0e7c363760392a7dfb5 Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.638476 4772 scope.go:117] "RemoveContainer" containerID="3faea0f432c35cca5d973248386014d0e4a7d8ba9f3c358c9a0245ab4355fa28" Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.939875 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-68fd445458-gvkkx"] Nov 28 11:25:17 crc kubenswrapper[4772]: W1128 11:25:17.948089 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7eabec09_9340_4cd4_a7db_ec957878a3a0.slice/crio-f6d078ee348d24c33590db4fdf786727a35d67fb7ffa3040b83af2940435c09b WatchSource:0}: Error finding container f6d078ee348d24c33590db4fdf786727a35d67fb7ffa3040b83af2940435c09b: Status 404 returned error can't find the container with id f6d078ee348d24c33590db4fdf786727a35d67fb7ffa3040b83af2940435c09b Nov 28 11:25:17 crc kubenswrapper[4772]: I1128 11:25:17.949426 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-76nzr"] Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.006885 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b59c268-e1dd-41c3-bfbb-945870d0df3c" path="/var/lib/kubelet/pods/9b59c268-e1dd-41c3-bfbb-945870d0df3c/volumes" Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.007532 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee784ed1-af35-4cee-93fc-23d12c813dc6" path="/var/lib/kubelet/pods/ee784ed1-af35-4cee-93fc-23d12c813dc6/volumes" Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.192934 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.214305 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-76nzr" event={"ID":"e46680be-1091-4b51-a858-2365af24a086","Type":"ContainerStarted","Data":"68075868569e81d85e2d4d1da830cfe56c9b3636bb11251ce620d297977239fa"} Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.216586 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8f47327-6cc4-4ca2-9363-0eca9d129686","Type":"ContainerStarted","Data":"b6e99f5d4c66fece3ff14fcba2bc904f668bfa1de22030db8081c41b707b74fb"} Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.221299 4772 generic.go:334] "Generic (PLEG): container finished" podID="31c75297-c867-4c84-8183-239f47947895" containerID="84264bb416fd01c01bb7e11ef9f7e9b973a10f012c5a264cf6c2b9a4fdae3df5" exitCode=0 Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.221369 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gkxww" event={"ID":"31c75297-c867-4c84-8183-239f47947895","Type":"ContainerDied","Data":"84264bb416fd01c01bb7e11ef9f7e9b973a10f012c5a264cf6c2b9a4fdae3df5"} Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.227281 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f99664784-xpqjq" event={"ID":"3a9ada7a-c788-41ad-87a6-431ba8c94394","Type":"ContainerStarted","Data":"cf67656ef9fc67ab268ce541422a1e037889432d44ae6b537de9229c1edd9aac"} Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.227348 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f99664784-xpqjq" event={"ID":"3a9ada7a-c788-41ad-87a6-431ba8c94394","Type":"ContainerStarted","Data":"5eba631f40b3a52b3f23892e8040b72fbe80e4ebe0def0e7c363760392a7dfb5"} Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.232134 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75774664c5-v7rms" event={"ID":"d7a16df0-9676-4d80-adc5-305fe795deb7","Type":"ContainerStarted","Data":"365a66e205b855db7d91d2c5985b9173a8efac7f9192c3919b01914f0b4d98cc"} Nov 28 11:25:18 crc kubenswrapper[4772]: 
I1128 11:25:18.232191 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75774664c5-v7rms" event={"ID":"d7a16df0-9676-4d80-adc5-305fe795deb7","Type":"ContainerStarted","Data":"3b112d920152df95b59eecdfa5294a9cd6eb81355d916e94bc8804261cf839e7"} Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.232193 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75774664c5-v7rms" podUID="d7a16df0-9676-4d80-adc5-305fe795deb7" containerName="horizon-log" containerID="cri-o://3b112d920152df95b59eecdfa5294a9cd6eb81355d916e94bc8804261cf839e7" gracePeriod=30 Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.232215 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75774664c5-v7rms" podUID="d7a16df0-9676-4d80-adc5-305fe795deb7" containerName="horizon" containerID="cri-o://365a66e205b855db7d91d2c5985b9173a8efac7f9192c3919b01914f0b4d98cc" gracePeriod=30 Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.251844 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68fd445458-gvkkx" event={"ID":"7eabec09-9340-4cd4-a7db-ec957878a3a0","Type":"ContainerStarted","Data":"f6d078ee348d24c33590db4fdf786727a35d67fb7ffa3040b83af2940435c09b"} Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.270069 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-75774664c5-v7rms" podStartSLOduration=2.807490676 podStartE2EDuration="31.270033539s" podCreationTimestamp="2025-11-28 11:24:47 +0000 UTC" firstStartedPulling="2025-11-28 11:24:48.442032816 +0000 UTC m=+1086.765276043" lastFinishedPulling="2025-11-28 11:25:16.904575679 +0000 UTC m=+1115.227818906" observedRunningTime="2025-11-28 11:25:18.265903251 +0000 UTC m=+1116.589146478" watchObservedRunningTime="2025-11-28 11:25:18.270033539 +0000 UTC m=+1116.593276786" Nov 28 11:25:18 crc kubenswrapper[4772]: I1128 11:25:18.887718 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:25:18 crc kubenswrapper[4772]: W1128 11:25:18.903382 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc9ee62a_cab5_4cac_afa0_18c776d8bab8.slice/crio-19d96e679f631f4a084ff56c8e73139c0f452eb1ad4c8c95a97cc1ed6a3a8d95 WatchSource:0}: Error finding container 19d96e679f631f4a084ff56c8e73139c0f452eb1ad4c8c95a97cc1ed6a3a8d95: Status 404 returned error can't find the container with id 19d96e679f631f4a084ff56c8e73139c0f452eb1ad4c8c95a97cc1ed6a3a8d95 Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.290894 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-76nzr" event={"ID":"e46680be-1091-4b51-a858-2365af24a086","Type":"ContainerStarted","Data":"4c3f2ad0c3c4bc426a6476bf20847d23e2b08a70dcb35d573dc0c522ef0f15df"} Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.314723 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68fd445458-gvkkx" event={"ID":"7eabec09-9340-4cd4-a7db-ec957878a3a0","Type":"ContainerStarted","Data":"13f17c4a0caab514585c5b369eeb1cd0a46f5cf4d06ccba0ade7be0f4ee6672a"} Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.314795 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68fd445458-gvkkx" event={"ID":"7eabec09-9340-4cd4-a7db-ec957878a3a0","Type":"ContainerStarted","Data":"8c298779730781bcf03dff9dee7b7d6a124a88f3438b03f6e35dab6e372388d5"} Nov 
28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.320478 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f99664784-xpqjq" event={"ID":"3a9ada7a-c788-41ad-87a6-431ba8c94394","Type":"ContainerStarted","Data":"3c56ba4bf5bc0cf72c234567ee1b01d3b50d6ecc486c26ddee1b1fd27bfb46b9"} Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.322140 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc9ee62a-cab5-4cac-afa0-18c776d8bab8","Type":"ContainerStarted","Data":"19d96e679f631f4a084ff56c8e73139c0f452eb1ad4c8c95a97cc1ed6a3a8d95"} Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.332943 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-76nzr" podStartSLOduration=12.332912397 podStartE2EDuration="12.332912397s" podCreationTimestamp="2025-11-28 11:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:19.320054452 +0000 UTC m=+1117.643297689" watchObservedRunningTime="2025-11-28 11:25:19.332912397 +0000 UTC m=+1117.656155624" Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.333972 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211","Type":"ContainerStarted","Data":"b3a3c15f75f1dd0d8ea8698890c19ce9db97fb2b51f97f6d52d3484996e6a097"} Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.334068 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211","Type":"ContainerStarted","Data":"f3dc590369f22a9b903e4dce3d296083638b546d8244dd4ac41d68f272112feb"} Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.347616 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68fd445458-gvkkx" podStartSLOduration=23.347589939 podStartE2EDuration="23.347589939s" podCreationTimestamp="2025-11-28 11:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:19.341083139 +0000 UTC m=+1117.664326366" watchObservedRunningTime="2025-11-28 11:25:19.347589939 +0000 UTC m=+1117.670833166" Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.366865 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f99664784-xpqjq" podStartSLOduration=23.3668415 podStartE2EDuration="23.3668415s" podCreationTimestamp="2025-11-28 11:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:19.363088863 +0000 UTC m=+1117.686332090" watchObservedRunningTime="2025-11-28 11:25:19.3668415 +0000 UTC m=+1117.690084727" Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.735168 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-gkxww" Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.912231 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c75297-c867-4c84-8183-239f47947895-combined-ca-bundle\") pod \"31c75297-c867-4c84-8183-239f47947895\" (UID: \"31c75297-c867-4c84-8183-239f47947895\") " Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.912310 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl7p2\" (UniqueName: \"kubernetes.io/projected/31c75297-c867-4c84-8183-239f47947895-kube-api-access-wl7p2\") pod \"31c75297-c867-4c84-8183-239f47947895\" (UID: \"31c75297-c867-4c84-8183-239f47947895\") " Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.912437 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31c75297-c867-4c84-8183-239f47947895-config\") pod \"31c75297-c867-4c84-8183-239f47947895\" (UID: \"31c75297-c867-4c84-8183-239f47947895\") " Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.936680 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c75297-c867-4c84-8183-239f47947895-kube-api-access-wl7p2" (OuterVolumeSpecName: "kube-api-access-wl7p2") pod "31c75297-c867-4c84-8183-239f47947895" (UID: "31c75297-c867-4c84-8183-239f47947895"). InnerVolumeSpecName "kube-api-access-wl7p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.947429 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31c75297-c867-4c84-8183-239f47947895-config" (OuterVolumeSpecName: "config") pod "31c75297-c867-4c84-8183-239f47947895" (UID: "31c75297-c867-4c84-8183-239f47947895"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:19 crc kubenswrapper[4772]: I1128 11:25:19.949951 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31c75297-c867-4c84-8183-239f47947895-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31c75297-c867-4c84-8183-239f47947895" (UID: "31c75297-c867-4c84-8183-239f47947895"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.014872 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c75297-c867-4c84-8183-239f47947895-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.014933 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl7p2\" (UniqueName: \"kubernetes.io/projected/31c75297-c867-4c84-8183-239f47947895-kube-api-access-wl7p2\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.014948 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/31c75297-c867-4c84-8183-239f47947895-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.388283 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211","Type":"ContainerStarted","Data":"c08587c649ab901237ef12fd2a7200bc78dc9fef2ee123b864a2db18290161cc"} Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.415266 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gkxww" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.415280 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gkxww" event={"ID":"31c75297-c867-4c84-8183-239f47947895","Type":"ContainerDied","Data":"03b7eea2f7440f8d4c28bde453734fbeb1b662d2be5459a77015ccc564208598"} Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.415337 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03b7eea2f7440f8d4c28bde453734fbeb1b662d2be5459a77015ccc564208598" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.419398 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=13.419385248 podStartE2EDuration="13.419385248s" podCreationTimestamp="2025-11-28 11:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:20.41790469 +0000 UTC m=+1118.741147917" watchObservedRunningTime="2025-11-28 11:25:20.419385248 +0000 UTC m=+1118.742628475" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.449935 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc9ee62a-cab5-4cac-afa0-18c776d8bab8","Type":"ContainerStarted","Data":"3ef314e38bd569ccddcb45b6dd790944a5a935910d9ae69d1381d79735b5df85"} Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.471734 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-4fv6l"] Nov 28 11:25:20 crc kubenswrapper[4772]: E1128 11:25:20.472534 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee784ed1-af35-4cee-93fc-23d12c813dc6" containerName="init" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.472635 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee784ed1-af35-4cee-93fc-23d12c813dc6" containerName="init" Nov 28 11:25:20 crc kubenswrapper[4772]: E1128 11:25:20.472731 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee784ed1-af35-4cee-93fc-23d12c813dc6" containerName="dnsmasq-dns" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 
11:25:20.472803 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee784ed1-af35-4cee-93fc-23d12c813dc6" containerName="dnsmasq-dns" Nov 28 11:25:20 crc kubenswrapper[4772]: E1128 11:25:20.472876 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c75297-c867-4c84-8183-239f47947895" containerName="neutron-db-sync" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.472928 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c75297-c867-4c84-8183-239f47947895" containerName="neutron-db-sync" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.473193 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee784ed1-af35-4cee-93fc-23d12c813dc6" containerName="dnsmasq-dns" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.473275 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c75297-c867-4c84-8183-239f47947895" containerName="neutron-db-sync" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.474444 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.521678 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-4fv6l"] Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.578935 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-95fd65484-v9p98"] Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.593594 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-95fd65484-v9p98"] Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.593762 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-95fd65484-v9p98" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.602397 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d5j4l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.602492 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.602510 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.603912 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.632546 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.632602 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-config\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.632887 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-dns-svc\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: 
\"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.632909 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.632941 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.632977 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjxgg\" (UniqueName: \"kubernetes.io/projected/52af4d7c-4347-47f2-8394-aeb9a51ae52f-kube-api-access-hjxgg\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.688678 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-2wtm9" podUID="ee784ed1-af35-4cee-93fc-23d12c813dc6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: i/o timeout" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.734863 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-httpd-config\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " pod="openstack/neutron-95fd65484-v9p98" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.735151 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-config\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " pod="openstack/neutron-95fd65484-v9p98" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.735247 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.735671 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-config\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.735812 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j5bs\" (UniqueName: \"kubernetes.io/projected/3cb60a74-40d5-4c23-85ea-a5256ff13988-kube-api-access-4j5bs\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " 
pod="openstack/neutron-95fd65484-v9p98" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.736712 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.736937 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-config\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.737275 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-combined-ca-bundle\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " pod="openstack/neutron-95fd65484-v9p98" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.737460 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-ovndb-tls-certs\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " pod="openstack/neutron-95fd65484-v9p98" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.737606 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-dns-svc\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.737712 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.738649 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.738749 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjxgg\" (UniqueName: \"kubernetes.io/projected/52af4d7c-4347-47f2-8394-aeb9a51ae52f-kube-api-access-hjxgg\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.738504 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-dns-svc\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:20 crc 
kubenswrapper[4772]: I1128 11:25:20.738607 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l"
Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.739983 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l"
Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.761567 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjxgg\" (UniqueName: \"kubernetes.io/projected/52af4d7c-4347-47f2-8394-aeb9a51ae52f-kube-api-access-hjxgg\") pod \"dnsmasq-dns-6b7b667979-4fv6l\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " pod="openstack/dnsmasq-dns-6b7b667979-4fv6l"
Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.831752 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-4fv6l"
Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.841336 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-config\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " pod="openstack/neutron-95fd65484-v9p98"
Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.841449 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j5bs\" (UniqueName: \"kubernetes.io/projected/3cb60a74-40d5-4c23-85ea-a5256ff13988-kube-api-access-4j5bs\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " pod="openstack/neutron-95fd65484-v9p98"
Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.841529 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-combined-ca-bundle\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " pod="openstack/neutron-95fd65484-v9p98"
Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.841580 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-ovndb-tls-certs\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " pod="openstack/neutron-95fd65484-v9p98"
Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.841657 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-httpd-config\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " pod="openstack/neutron-95fd65484-v9p98"
Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.846845 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-combined-ca-bundle\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " pod="openstack/neutron-95fd65484-v9p98"
Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.847099 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-ovndb-tls-certs\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " pod="openstack/neutron-95fd65484-v9p98"
Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.848170 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-httpd-config\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " pod="openstack/neutron-95fd65484-v9p98"
Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.853536 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-config\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " pod="openstack/neutron-95fd65484-v9p98"
Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.865383 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j5bs\" (UniqueName: \"kubernetes.io/projected/3cb60a74-40d5-4c23-85ea-a5256ff13988-kube-api-access-4j5bs\") pod \"neutron-95fd65484-v9p98\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " pod="openstack/neutron-95fd65484-v9p98"
Nov 28 11:25:20 crc kubenswrapper[4772]: I1128 11:25:20.940855 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-95fd65484-v9p98"
Nov 28 11:25:21 crc kubenswrapper[4772]: I1128 11:25:21.400193 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-4fv6l"]
Nov 28 11:25:21 crc kubenswrapper[4772]: I1128 11:25:21.474073 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" event={"ID":"52af4d7c-4347-47f2-8394-aeb9a51ae52f","Type":"ContainerStarted","Data":"412f8d68464f66aed72ae7dba1080c5d0bde45bd102d561e6120b7ff81bca3f8"}
Nov 28 11:25:21 crc kubenswrapper[4772]: I1128 11:25:21.723761 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-95fd65484-v9p98"]
Nov 28 11:25:21 crc kubenswrapper[4772]: W1128 11:25:21.731713 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cb60a74_40d5_4c23_85ea_a5256ff13988.slice/crio-7dcc947fe3b84bc73b1b2dd266b3b1ac3daf6f811eb9d35796977e0496cb407a WatchSource:0}: Error finding container 7dcc947fe3b84bc73b1b2dd266b3b1ac3daf6f811eb9d35796977e0496cb407a: Status 404 returned error can't find the container with id 7dcc947fe3b84bc73b1b2dd266b3b1ac3daf6f811eb9d35796977e0496cb407a
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.489856 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc9ee62a-cab5-4cac-afa0-18c776d8bab8","Type":"ContainerStarted","Data":"ebf98bc0f3b316c0279cc09f0929c153b2826be708a628a1a7231be1621f255e"}
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.493619 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95fd65484-v9p98" event={"ID":"3cb60a74-40d5-4c23-85ea-a5256ff13988","Type":"ContainerStarted","Data":"7dcc947fe3b84bc73b1b2dd266b3b1ac3daf6f811eb9d35796977e0496cb407a"}
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.686807 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-758875cc6f-fmsqk"]
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.689495 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.698728 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.699018 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.708543 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-758875cc6f-fmsqk"]
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.795637 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-ovndb-tls-certs\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.795750 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-internal-tls-certs\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.795796 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-combined-ca-bundle\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.795832 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-public-tls-certs\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.795893 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvmv4\" (UniqueName: \"kubernetes.io/projected/d251a047-3f90-4db3-8cae-b65b24395fdf-kube-api-access-gvmv4\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.795928 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-httpd-config\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.795965 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-config\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.898422 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvmv4\" (UniqueName: \"kubernetes.io/projected/d251a047-3f90-4db3-8cae-b65b24395fdf-kube-api-access-gvmv4\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.898520 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-httpd-config\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.898576 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-config\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.898652 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-ovndb-tls-certs\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.898703 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-internal-tls-certs\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.898754 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-combined-ca-bundle\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.898781 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-public-tls-certs\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.909198 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-internal-tls-certs\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.913901 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-config\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.916190 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvmv4\" (UniqueName: \"kubernetes.io/projected/d251a047-3f90-4db3-8cae-b65b24395fdf-kube-api-access-gvmv4\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.917854 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-httpd-config\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.919053 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-combined-ca-bundle\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.920132 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-public-tls-certs\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:22 crc kubenswrapper[4772]: I1128 11:25:22.926442 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d251a047-3f90-4db3-8cae-b65b24395fdf-ovndb-tls-certs\") pod \"neutron-758875cc6f-fmsqk\" (UID: \"d251a047-3f90-4db3-8cae-b65b24395fdf\") " pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:23 crc kubenswrapper[4772]: I1128 11:25:23.042446 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:23 crc kubenswrapper[4772]: I1128 11:25:23.522743 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" event={"ID":"52af4d7c-4347-47f2-8394-aeb9a51ae52f","Type":"ContainerStarted","Data":"1327b463916d37e0e51b8b204627f2bff357603e78789782a59e0c2c023b6881"}
Nov 28 11:25:23 crc kubenswrapper[4772]: I1128 11:25:23.530577 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95fd65484-v9p98" event={"ID":"3cb60a74-40d5-4c23-85ea-a5256ff13988","Type":"ContainerStarted","Data":"f7fa60ab485700246fa8a955b4aec3806c70a46c74a47c0ecb93756a90fa078b"}
Nov 28 11:25:23 crc kubenswrapper[4772]: I1128 11:25:23.558571 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=16.558551533 podStartE2EDuration="16.558551533s" podCreationTimestamp="2025-11-28 11:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:23.557593478 +0000 UTC m=+1121.880836705" watchObservedRunningTime="2025-11-28 11:25:23.558551533 +0000 UTC m=+1121.881794760"
Nov 28 11:25:23 crc kubenswrapper[4772]: I1128 11:25:23.660243 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-758875cc6f-fmsqk"]
Nov 28 11:25:24 crc kubenswrapper[4772]: I1128 11:25:24.541590 4772 generic.go:334] "Generic (PLEG): container finished" podID="52af4d7c-4347-47f2-8394-aeb9a51ae52f" containerID="1327b463916d37e0e51b8b204627f2bff357603e78789782a59e0c2c023b6881" exitCode=0
Nov 28 11:25:24 crc kubenswrapper[4772]: I1128 11:25:24.541701 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" event={"ID":"52af4d7c-4347-47f2-8394-aeb9a51ae52f","Type":"ContainerDied","Data":"1327b463916d37e0e51b8b204627f2bff357603e78789782a59e0c2c023b6881"}
Nov 28 11:25:24 crc kubenswrapper[4772]: I1128 11:25:24.547014 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-758875cc6f-fmsqk" event={"ID":"d251a047-3f90-4db3-8cae-b65b24395fdf","Type":"ContainerStarted","Data":"c7eabd4d958bb0d6536ec081ea772854cfc88c15423139ab6b43cc0b1aea54c0"}
Nov 28 11:25:24 crc kubenswrapper[4772]: I1128 11:25:24.547058 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-758875cc6f-fmsqk" event={"ID":"d251a047-3f90-4db3-8cae-b65b24395fdf","Type":"ContainerStarted","Data":"975f31dac74e5ce9e999ae4f12b30815ddc996386523a0fde4fce56bc78eac0a"}
Nov 28 11:25:24 crc kubenswrapper[4772]: I1128 11:25:24.548734 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-krcf2" event={"ID":"3d1f86f7-529a-4ed1-885d-1beb4c14b213","Type":"ContainerStarted","Data":"75e39f58e849bce92ad3b13b58d906f6aa75950642f86a9f1f1db89e64d02e75"}
Nov 28 11:25:24 crc kubenswrapper[4772]: I1128 11:25:24.607339 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-krcf2" podStartSLOduration=5.930407774 podStartE2EDuration="37.607308972s" podCreationTimestamp="2025-11-28 11:24:47 +0000 UTC" firstStartedPulling="2025-11-28 11:24:48.483658519 +0000 UTC m=+1086.806901746" lastFinishedPulling="2025-11-28 11:25:20.160559717 +0000 UTC m=+1118.483802944" observedRunningTime="2025-11-28 11:25:24.58418685 +0000 UTC m=+1122.907430077" watchObservedRunningTime="2025-11-28 11:25:24.607308972 +0000 UTC m=+1122.930552199"
Nov 28 11:25:25 crc kubenswrapper[4772]: I1128 11:25:25.563660 4772 generic.go:334] "Generic (PLEG): container finished" podID="e46680be-1091-4b51-a858-2365af24a086" containerID="4c3f2ad0c3c4bc426a6476bf20847d23e2b08a70dcb35d573dc0c522ef0f15df" exitCode=0
Nov 28 11:25:25 crc kubenswrapper[4772]: I1128 11:25:25.563718 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-76nzr" event={"ID":"e46680be-1091-4b51-a858-2365af24a086","Type":"ContainerDied","Data":"4c3f2ad0c3c4bc426a6476bf20847d23e2b08a70dcb35d573dc0c522ef0f15df"}
Nov 28 11:25:26 crc kubenswrapper[4772]: I1128 11:25:26.487078 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68fd445458-gvkkx"
Nov 28 11:25:26 crc kubenswrapper[4772]: I1128 11:25:26.487695 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68fd445458-gvkkx"
Nov 28 11:25:26 crc kubenswrapper[4772]: I1128 11:25:26.620464 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f99664784-xpqjq"
Nov 28 11:25:26 crc kubenswrapper[4772]: I1128 11:25:26.620821 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f99664784-xpqjq"
Nov 28 11:25:27 crc kubenswrapper[4772]: I1128 11:25:27.588769 4772 generic.go:334] "Generic (PLEG): container finished" podID="3d1f86f7-529a-4ed1-885d-1beb4c14b213" containerID="75e39f58e849bce92ad3b13b58d906f6aa75950642f86a9f1f1db89e64d02e75" exitCode=0
Nov 28 11:25:27 crc kubenswrapper[4772]: I1128 11:25:27.589228 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-krcf2" event={"ID":"3d1f86f7-529a-4ed1-885d-1beb4c14b213","Type":"ContainerDied","Data":"75e39f58e849bce92ad3b13b58d906f6aa75950642f86a9f1f1db89e64d02e75"}
Nov 28 11:25:27 crc kubenswrapper[4772]: I1128 11:25:27.626573 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75774664c5-v7rms"
Nov 28 11:25:27 crc kubenswrapper[4772]: I1128 11:25:27.665677 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 28 11:25:27 crc kubenswrapper[4772]: I1128 11:25:27.665738 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 28 11:25:27 crc kubenswrapper[4772]: I1128 11:25:27.705254 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 28 11:25:27 crc kubenswrapper[4772]: I1128 11:25:27.714330 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 28 11:25:27 crc kubenswrapper[4772]: I1128 11:25:27.909143 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 28 11:25:27 crc kubenswrapper[4772]: I1128 11:25:27.909235 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 28 11:25:27 crc kubenswrapper[4772]: I1128 11:25:27.947492 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 28 11:25:27 crc kubenswrapper[4772]: I1128 11:25:27.955590 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.609669 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-76nzr" event={"ID":"e46680be-1091-4b51-a858-2365af24a086","Type":"ContainerDied","Data":"68075868569e81d85e2d4d1da830cfe56c9b3636bb11251ce620d297977239fa"}
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.610048 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68075868569e81d85e2d4d1da830cfe56c9b3636bb11251ce620d297977239fa"
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.612259 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.612341 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.612413 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.612482 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.627133 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-76nzr"
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.756457 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62pk6\" (UniqueName: \"kubernetes.io/projected/e46680be-1091-4b51-a858-2365af24a086-kube-api-access-62pk6\") pod \"e46680be-1091-4b51-a858-2365af24a086\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") "
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.756945 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-scripts\") pod \"e46680be-1091-4b51-a858-2365af24a086\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") "
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.757007 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-combined-ca-bundle\") pod \"e46680be-1091-4b51-a858-2365af24a086\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") "
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.757057 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-credential-keys\") pod \"e46680be-1091-4b51-a858-2365af24a086\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") "
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.757099 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-config-data\") pod \"e46680be-1091-4b51-a858-2365af24a086\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") "
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.757120 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-fernet-keys\") pod \"e46680be-1091-4b51-a858-2365af24a086\" (UID: \"e46680be-1091-4b51-a858-2365af24a086\") "
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.778059 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-scripts" (OuterVolumeSpecName: "scripts") pod "e46680be-1091-4b51-a858-2365af24a086" (UID: "e46680be-1091-4b51-a858-2365af24a086"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.790474 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e46680be-1091-4b51-a858-2365af24a086-kube-api-access-62pk6" (OuterVolumeSpecName: "kube-api-access-62pk6") pod "e46680be-1091-4b51-a858-2365af24a086" (UID: "e46680be-1091-4b51-a858-2365af24a086"). InnerVolumeSpecName "kube-api-access-62pk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.792579 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e46680be-1091-4b51-a858-2365af24a086" (UID: "e46680be-1091-4b51-a858-2365af24a086"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.811819 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e46680be-1091-4b51-a858-2365af24a086" (UID: "e46680be-1091-4b51-a858-2365af24a086"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.861899 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62pk6\" (UniqueName: \"kubernetes.io/projected/e46680be-1091-4b51-a858-2365af24a086-kube-api-access-62pk6\") on node \"crc\" DevicePath \"\""
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.861956 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.861968 4772 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-credential-keys\") on node \"crc\" DevicePath \"\""
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.861979 4772 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.895874 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e46680be-1091-4b51-a858-2365af24a086" (UID: "e46680be-1091-4b51-a858-2365af24a086"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.919466 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-config-data" (OuterVolumeSpecName: "config-data") pod "e46680be-1091-4b51-a858-2365af24a086" (UID: "e46680be-1091-4b51-a858-2365af24a086"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.966748 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.966794 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e46680be-1091-4b51-a858-2365af24a086-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 11:25:28 crc kubenswrapper[4772]: I1128 11:25:28.981792 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-krcf2"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.071564 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-config-data\") pod \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") "
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.071935 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw2gr\" (UniqueName: \"kubernetes.io/projected/3d1f86f7-529a-4ed1-885d-1beb4c14b213-kube-api-access-jw2gr\") pod \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") "
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.071987 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d1f86f7-529a-4ed1-885d-1beb4c14b213-logs\") pod \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") "
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.072078 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-combined-ca-bundle\") pod \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") "
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.072131 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-scripts\") pod \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\" (UID: \"3d1f86f7-529a-4ed1-885d-1beb4c14b213\") "
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.072421 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d1f86f7-529a-4ed1-885d-1beb4c14b213-logs" (OuterVolumeSpecName: "logs") pod "3d1f86f7-529a-4ed1-885d-1beb4c14b213" (UID: "3d1f86f7-529a-4ed1-885d-1beb4c14b213"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.072828 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d1f86f7-529a-4ed1-885d-1beb4c14b213-logs\") on node \"crc\" DevicePath \"\""
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.079023 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d1f86f7-529a-4ed1-885d-1beb4c14b213-kube-api-access-jw2gr" (OuterVolumeSpecName: "kube-api-access-jw2gr") pod "3d1f86f7-529a-4ed1-885d-1beb4c14b213" (UID: "3d1f86f7-529a-4ed1-885d-1beb4c14b213"). InnerVolumeSpecName "kube-api-access-jw2gr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.090170 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-scripts" (OuterVolumeSpecName: "scripts") pod "3d1f86f7-529a-4ed1-885d-1beb4c14b213" (UID: "3d1f86f7-529a-4ed1-885d-1beb4c14b213"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.123587 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-config-data" (OuterVolumeSpecName: "config-data") pod "3d1f86f7-529a-4ed1-885d-1beb4c14b213" (UID: "3d1f86f7-529a-4ed1-885d-1beb4c14b213"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.129306 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d1f86f7-529a-4ed1-885d-1beb4c14b213" (UID: "3d1f86f7-529a-4ed1-885d-1beb4c14b213"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.174915 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw2gr\" (UniqueName: \"kubernetes.io/projected/3d1f86f7-529a-4ed1-885d-1beb4c14b213-kube-api-access-jw2gr\") on node \"crc\" DevicePath \"\""
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.174953 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.174963 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.174971 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1f86f7-529a-4ed1-885d-1beb4c14b213-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.621524 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95fd65484-v9p98" event={"ID":"3cb60a74-40d5-4c23-85ea-a5256ff13988","Type":"ContainerStarted","Data":"10dafc5ee45eeee82aabdba12b565d2afd6f4f07ee51479d3dbabd2919963abb"}
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.621739 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-95fd65484-v9p98"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.624381 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8f47327-6cc4-4ca2-9363-0eca9d129686","Type":"ContainerStarted","Data":"4bcb03b424e57104d98bfa4dc924fa675a63cd3a42ac261d8970a464069ac549"}
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.626292 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" event={"ID":"52af4d7c-4347-47f2-8394-aeb9a51ae52f","Type":"ContainerStarted","Data":"d7b57c06bb6e105e9397deee90b325461899f607161492e9292e0774e001cad4"}
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.626491 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-4fv6l"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.628197 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-758875cc6f-fmsqk" event={"ID":"d251a047-3f90-4db3-8cae-b65b24395fdf","Type":"ContainerStarted","Data":"dab63b977caedf64b78d4f79785f6d47393ad550a6773ed32a31dcfa376f5737"}
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.628325 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-758875cc6f-fmsqk"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.631030 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-krcf2"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.631063 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-76nzr"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.631063 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-krcf2" event={"ID":"3d1f86f7-529a-4ed1-885d-1beb4c14b213","Type":"ContainerDied","Data":"1014aa2e197e4599bc5889bf3db8b5d4329149e25cca5ca127c9f20cd5f35f4d"}
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.631132 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1014aa2e197e4599bc5889bf3db8b5d4329149e25cca5ca127c9f20cd5f35f4d"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.704966 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-95fd65484-v9p98" podStartSLOduration=9.704940134 podStartE2EDuration="9.704940134s" podCreationTimestamp="2025-11-28 11:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:29.6725133 +0000 UTC m=+1127.995756537" watchObservedRunningTime="2025-11-28 11:25:29.704940134 +0000 UTC m=+1128.028183351"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.729386 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-758875cc6f-fmsqk" podStartSLOduration=7.72934273 podStartE2EDuration="7.72934273s" podCreationTimestamp="2025-11-28 11:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:29.714208066 +0000 UTC m=+1128.037451303" watchObservedRunningTime="2025-11-28 11:25:29.72934273 +0000 UTC m=+1128.052585947"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.773750 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-74fc54dcd4-9z4wp"]
Nov 28 11:25:29 crc kubenswrapper[4772]: E1128 11:25:29.774624 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e46680be-1091-4b51-a858-2365af24a086" containerName="keystone-bootstrap"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.774636 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e46680be-1091-4b51-a858-2365af24a086" containerName="keystone-bootstrap"
Nov 28 11:25:29 crc kubenswrapper[4772]: E1128 11:25:29.774673 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d1f86f7-529a-4ed1-885d-1beb4c14b213" containerName="placement-db-sync"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.774680 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d1f86f7-529a-4ed1-885d-1beb4c14b213" containerName="placement-db-sync"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.774882 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d1f86f7-529a-4ed1-885d-1beb4c14b213" containerName="placement-db-sync"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.774909 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e46680be-1091-4b51-a858-2365af24a086" containerName="keystone-bootstrap"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.776185 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.781957 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" podStartSLOduration=9.78193227 podStartE2EDuration="9.78193227s" podCreationTimestamp="2025-11-28 11:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:29.776726804 +0000 UTC m=+1128.099970041" watchObservedRunningTime="2025-11-28 11:25:29.78193227 +0000 UTC m=+1128.105175497"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.802826 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zw9g7"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.804229 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.804663 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.804959 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.825135 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.897614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-scripts\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.897964 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-logs\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.898094 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-combined-ca-bundle\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.898190 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-internal-tls-certs\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.898276 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-config-data\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.898396 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6ktc\" (UniqueName: \"kubernetes.io/projected/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-kube-api-access-z6ktc\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.898476 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-public-tls-certs\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.898650 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-56987d8b67-lwl5z"]
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.899995 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.906921 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.907235 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.907443 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.907671 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4j4h4"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.908292 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.908540 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.911962 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74fc54dcd4-9z4wp"]
Nov 28 11:25:29 crc kubenswrapper[4772]: I1128 11:25:29.922019 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56987d8b67-lwl5z"]
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.000822 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-credential-keys\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.000909 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-public-tls-certs\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.000975 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxw4l\" (UniqueName: \"kubernetes.io/projected/473cc657-5696-4761-a692-e4929954d45b-kube-api-access-zxw4l\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.001009 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-scripts\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.001042 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-logs\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.001066 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-combined-ca-bundle\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.001087 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-fernet-keys\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.001116 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-combined-ca-bundle\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.001150 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-internal-tls-certs\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.001182 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-config-data\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.001199 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-scripts\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.001233 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-config-data\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.001264 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6ktc\" (UniqueName: \"kubernetes.io/projected/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-kube-api-access-z6ktc\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.001279 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-internal-tls-certs\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.001299 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-public-tls-certs\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.002289 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-logs\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.009684 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-config-data\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.013848 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-internal-tls-certs\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.015883 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-combined-ca-bundle\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.018578 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-scripts\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.028982 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-public-tls-certs\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.046991 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6ktc\" (UniqueName: \"kubernetes.io/projected/0c712f4c-0d11-4e33-a725-4a5ec8f62c5f-kube-api-access-z6ktc\") pod \"placement-74fc54dcd4-9z4wp\" (UID: \"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f\") " pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.103684 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-fernet-keys\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.103774 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-combined-ca-bundle\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.104545 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-scripts\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.104885 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-config-data\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.104924 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-internal-tls-certs\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.104947 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-credential-keys\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.104976 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-public-tls-certs\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.105052 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxw4l\" (UniqueName: \"kubernetes.io/projected/473cc657-5696-4761-a692-e4929954d45b-kube-api-access-zxw4l\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.118275 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-credential-keys\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.118995 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-scripts\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.119337 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-combined-ca-bundle\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.122047 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-fernet-keys\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.127086 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.183030 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-internal-tls-certs\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.184411 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-config-data\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.184634 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/473cc657-5696-4761-a692-e4929954d45b-public-tls-certs\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.185714 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxw4l\" (UniqueName: \"kubernetes.io/projected/473cc657-5696-4761-a692-e4929954d45b-kube-api-access-zxw4l\") pod \"keystone-56987d8b67-lwl5z\" (UID: \"473cc657-5696-4761-a692-e4929954d45b\") " pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.270088 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.677079 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.677124 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.767345 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74fc54dcd4-9z4wp"]
Nov 28 11:25:30 crc kubenswrapper[4772]: I1128 11:25:30.876064 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56987d8b67-lwl5z"]
Nov 28 11:25:31 crc kubenswrapper[4772]: I1128 11:25:31.566997 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 28 11:25:31 crc kubenswrapper[4772]: I1128 11:25:31.567976 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 11:25:31 crc kubenswrapper[4772]: I1128 11:25:31.568345 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 28 11:25:31 crc kubenswrapper[4772]: I1128 11:25:31.688140 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74fc54dcd4-9z4wp" event={"ID":"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f","Type":"ContainerStarted","Data":"e30c642aff6804c21941aea7d83689b4cd310d7883ebd9b088ea5c93617fb56b"}
Nov 28 11:25:31 crc kubenswrapper[4772]: I1128 11:25:31.688193 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74fc54dcd4-9z4wp" event={"ID":"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f","Type":"ContainerStarted","Data":"87d1f984033f993da7514a2455b7f6c77e72bb93253f82c3a5e0aae7c7801283"}
Nov 28 11:25:31 crc kubenswrapper[4772]: I1128 11:25:31.690984 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56987d8b67-lwl5z" event={"ID":"473cc657-5696-4761-a692-e4929954d45b","Type":"ContainerStarted","Data":"8a87570ea4311c03960de77b79942a55367b5255ee61993a2d5f17a71963e510"}
Nov 28 11:25:31 crc kubenswrapper[4772]: I1128 11:25:31.730041 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 28 11:25:31 crc kubenswrapper[4772]: I1128 11:25:31.730160 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 11:25:31 crc kubenswrapper[4772]: I1128 11:25:31.987701 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 28 11:25:32 crc kubenswrapper[4772]: I1128 11:25:32.706163 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56987d8b67-lwl5z" event={"ID":"473cc657-5696-4761-a692-e4929954d45b","Type":"ContainerStarted","Data":"6dd257bc0b68f10919fa00d592e6ae360057157eecfd25812e2fe60108e7aa3f"}
Nov 28 11:25:32 crc kubenswrapper[4772]: I1128 11:25:32.708009 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-56987d8b67-lwl5z"
Nov 28 11:25:32 crc kubenswrapper[4772]: I1128 11:25:32.720027 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74fc54dcd4-9z4wp" event={"ID":"0c712f4c-0d11-4e33-a725-4a5ec8f62c5f","Type":"ContainerStarted","Data":"6d16931fe4c39e41649703df1864e03e721be799d8cd41aff656de405a1b331c"}
Nov 28 11:25:32 crc kubenswrapper[4772]: I1128 11:25:32.747296 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-56987d8b67-lwl5z" podStartSLOduration=3.747265606 podStartE2EDuration="3.747265606s" podCreationTimestamp="2025-11-28 11:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:32.732303296 +0000 UTC m=+1131.055546533" watchObservedRunningTime="2025-11-28 11:25:32.747265606 +0000 UTC m=+1131.070508843"
Nov 28 11:25:32 crc kubenswrapper[4772]: I1128 11:25:32.773829 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-74fc54dcd4-9z4wp" podStartSLOduration=3.773803497 podStartE2EDuration="3.773803497s" podCreationTimestamp="2025-11-28 11:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:32.766101726 +0000 UTC m=+1131.089344973" watchObservedRunningTime="2025-11-28 11:25:32.773803497 +0000 UTC m=+1131.097046724"
Nov 28 11:25:33 crc kubenswrapper[4772]: I1128 11:25:33.730531 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j9ck9" event={"ID":"59e2ef4f-2f84-4b24-9a30-27fea471faf5","Type":"ContainerStarted","Data":"97f04da42ccc37d18ada52c2535292efc67ed9c75175f717c689312537f1ee5a"}
Nov 28 11:25:33 crc kubenswrapper[4772]: I1128 11:25:33.738254 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-72xt2" event={"ID":"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe","Type":"ContainerStarted","Data":"1b14c4c8c701e2c3c64864f1646a6c347c5d3a9e2d758e143dcc7ac3fffaa706"}
Nov 28 11:25:33 crc kubenswrapper[4772]: I1128 11:25:33.739375 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:33 crc kubenswrapper[4772]: I1128 11:25:33.739414 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74fc54dcd4-9z4wp"
Nov 28 11:25:33 crc kubenswrapper[4772]: I1128 11:25:33.749146 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-j9ck9" podStartSLOduration=3.475799902 podStartE2EDuration="46.749127714s" podCreationTimestamp="2025-11-28 11:24:47 +0000 UTC" firstStartedPulling="2025-11-28 11:24:49.033241954 +0000 UTC m=+1087.356485181" lastFinishedPulling="2025-11-28 11:25:32.306569766 +0000 UTC m=+1130.629812993" observedRunningTime="2025-11-28 11:25:33.744584666 +0000 UTC m=+1132.067827893" watchObservedRunningTime="2025-11-28 11:25:33.749127714 +0000 UTC m=+1132.072370941"
Nov 28 11:25:33 crc kubenswrapper[4772]: I1128 11:25:33.774080 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-72xt2" podStartSLOduration=3.727193713 podStartE2EDuration="47.774053213s" podCreationTimestamp="2025-11-28 11:24:46 +0000 UTC" firstStartedPulling="2025-11-28 11:24:48.484011129 +0000 UTC m=+1086.807254346" lastFinishedPulling="2025-11-28 11:25:32.530870619 +0000 UTC m=+1130.854113846" observedRunningTime="2025-11-28 11:25:33.765937002 +0000 UTC m=+1132.089180229" watchObservedRunningTime="2025-11-28 11:25:33.774053213 +0000 UTC m=+1132.097296440"
Nov 28 11:25:35 crc kubenswrapper[4772]: I1128 11:25:35.834510 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-4fv6l"
Nov 28 11:25:35 crc kubenswrapper[4772]: I1128 11:25:35.906934 4772 kubelet.go:2437] "SyncLoop DELETE" source="api"
pods=["openstack/dnsmasq-dns-56df8fb6b7-wnq4f"] Nov 28 11:25:35 crc kubenswrapper[4772]: I1128 11:25:35.908502 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" podUID="0283d234-d683-44b1-8e41-71fef0c61b16" containerName="dnsmasq-dns" containerID="cri-o://4b113a5a5fe573e7e8954abf02cea36ea05635597c728e25f732bb8c84072813" gracePeriod=10 Nov 28 11:25:36 crc kubenswrapper[4772]: I1128 11:25:36.488697 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68fd445458-gvkkx" podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Nov 28 11:25:36 crc kubenswrapper[4772]: I1128 11:25:36.625422 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f99664784-xpqjq" podUID="3a9ada7a-c788-41ad-87a6-431ba8c94394" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Nov 28 11:25:36 crc kubenswrapper[4772]: I1128 11:25:36.772609 4772 generic.go:334] "Generic (PLEG): container finished" podID="0283d234-d683-44b1-8e41-71fef0c61b16" containerID="4b113a5a5fe573e7e8954abf02cea36ea05635597c728e25f732bb8c84072813" exitCode=0 Nov 28 11:25:36 crc kubenswrapper[4772]: I1128 11:25:36.772701 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" event={"ID":"0283d234-d683-44b1-8e41-71fef0c61b16","Type":"ContainerDied","Data":"4b113a5a5fe573e7e8954abf02cea36ea05635597c728e25f732bb8c84072813"} Nov 28 11:25:36 crc kubenswrapper[4772]: I1128 11:25:36.774600 4772 generic.go:334] "Generic (PLEG): container finished" podID="59e2ef4f-2f84-4b24-9a30-27fea471faf5" containerID="97f04da42ccc37d18ada52c2535292efc67ed9c75175f717c689312537f1ee5a" exitCode=0 Nov 28 11:25:36 crc kubenswrapper[4772]: I1128 11:25:36.774650 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j9ck9" event={"ID":"59e2ef4f-2f84-4b24-9a30-27fea471faf5","Type":"ContainerDied","Data":"97f04da42ccc37d18ada52c2535292efc67ed9c75175f717c689312537f1ee5a"} Nov 28 11:25:38 crc kubenswrapper[4772]: I1128 11:25:38.271722 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" podUID="0283d234-d683-44b1-8e41-71fef0c61b16" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.335349 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-j9ck9" Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.477237 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shjcd\" (UniqueName: \"kubernetes.io/projected/59e2ef4f-2f84-4b24-9a30-27fea471faf5-kube-api-access-shjcd\") pod \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\" (UID: \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\") " Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.477318 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e2ef4f-2f84-4b24-9a30-27fea471faf5-combined-ca-bundle\") pod \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\" (UID: \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\") " Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.477491 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59e2ef4f-2f84-4b24-9a30-27fea471faf5-db-sync-config-data\") pod \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\" (UID: \"59e2ef4f-2f84-4b24-9a30-27fea471faf5\") " Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.546839 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59e2ef4f-2f84-4b24-9a30-27fea471faf5-kube-api-access-shjcd" (OuterVolumeSpecName: "kube-api-access-shjcd") pod "59e2ef4f-2f84-4b24-9a30-27fea471faf5" (UID: "59e2ef4f-2f84-4b24-9a30-27fea471faf5"). InnerVolumeSpecName "kube-api-access-shjcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.559637 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e2ef4f-2f84-4b24-9a30-27fea471faf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59e2ef4f-2f84-4b24-9a30-27fea471faf5" (UID: "59e2ef4f-2f84-4b24-9a30-27fea471faf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.566860 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e2ef4f-2f84-4b24-9a30-27fea471faf5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "59e2ef4f-2f84-4b24-9a30-27fea471faf5" (UID: "59e2ef4f-2f84-4b24-9a30-27fea471faf5"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.580773 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shjcd\" (UniqueName: \"kubernetes.io/projected/59e2ef4f-2f84-4b24-9a30-27fea471faf5-kube-api-access-shjcd\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.580814 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e2ef4f-2f84-4b24-9a30-27fea471faf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.580826 4772 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59e2ef4f-2f84-4b24-9a30-27fea471faf5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.810593 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j9ck9" event={"ID":"59e2ef4f-2f84-4b24-9a30-27fea471faf5","Type":"ContainerDied","Data":"c1ce997143cb9ab18be0d175f91ac4cf6846976f0856926ba6befc0758f9aaa7"} Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.811003 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1ce997143cb9ab18be0d175f91ac4cf6846976f0856926ba6befc0758f9aaa7" Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.810673 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-j9ck9" Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.815993 4772 generic.go:334] "Generic (PLEG): container finished" podID="2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe" containerID="1b14c4c8c701e2c3c64864f1646a6c347c5d3a9e2d758e143dcc7ac3fffaa706" exitCode=0 Nov 28 11:25:39 crc kubenswrapper[4772]: I1128 11:25:39.816059 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-72xt2" event={"ID":"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe","Type":"ContainerDied","Data":"1b14c4c8c701e2c3c64864f1646a6c347c5d3a9e2d758e143dcc7ac3fffaa706"} Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.670586 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-86946c9dff-9km2x"] Nov 28 11:25:40 crc kubenswrapper[4772]: E1128 11:25:40.677033 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e2ef4f-2f84-4b24-9a30-27fea471faf5" containerName="barbican-db-sync" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.677094 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e2ef4f-2f84-4b24-9a30-27fea471faf5" containerName="barbican-db-sync" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.677538 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e2ef4f-2f84-4b24-9a30-27fea471faf5" containerName="barbican-db-sync" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.679699 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.688423 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.688733 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-54jnx" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.689284 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.690542 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-86946c9dff-9km2x"] Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.713585 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-bd5f4f5b6-n56r2"] Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.715454 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.724269 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.733846 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-bd5f4f5b6-n56r2"] Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.789551 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-h4cfz"] Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.791828 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.813014 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c591ea97-2d66-45fe-85e2-1c22c6af8218-logs\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.813050 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f72d166-9d59-443d-9af2-3d93c158ef98-combined-ca-bundle\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: \"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.813108 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c591ea97-2d66-45fe-85e2-1c22c6af8218-config-data\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.813166 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f72d166-9d59-443d-9af2-3d93c158ef98-logs\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: \"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 
11:25:40.813191 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scjcc\" (UniqueName: \"kubernetes.io/projected/c591ea97-2d66-45fe-85e2-1c22c6af8218-kube-api-access-scjcc\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.813212 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c591ea97-2d66-45fe-85e2-1c22c6af8218-combined-ca-bundle\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.813230 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtsgk\" (UniqueName: \"kubernetes.io/projected/8f72d166-9d59-443d-9af2-3d93c158ef98-kube-api-access-xtsgk\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: \"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.813255 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f72d166-9d59-443d-9af2-3d93c158ef98-config-data-custom\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: \"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.813296 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f72d166-9d59-443d-9af2-3d93c158ef98-config-data\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: \"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.813324 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c591ea97-2d66-45fe-85e2-1c22c6af8218-config-data-custom\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.867121 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-h4cfz"] Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.891898 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-9c587cc74-7vk5q"] Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.893893 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.898165 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.918474 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9c587cc74-7vk5q"] Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924158 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c591ea97-2d66-45fe-85e2-1c22c6af8218-logs\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924194 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f72d166-9d59-443d-9af2-3d93c158ef98-combined-ca-bundle\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: \"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924241 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924263 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924620 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c591ea97-2d66-45fe-85e2-1c22c6af8218-config-data\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924665 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924702 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6bmg\" (UniqueName: \"kubernetes.io/projected/f625b657-d0ba-4597-8f08-93e1bca9faf5-kube-api-access-c6bmg\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924800 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f72d166-9d59-443d-9af2-3d93c158ef98-logs\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: 
\"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924830 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scjcc\" (UniqueName: \"kubernetes.io/projected/c591ea97-2d66-45fe-85e2-1c22c6af8218-kube-api-access-scjcc\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924860 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c591ea97-2d66-45fe-85e2-1c22c6af8218-combined-ca-bundle\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924877 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-config\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924897 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtsgk\" (UniqueName: \"kubernetes.io/projected/8f72d166-9d59-443d-9af2-3d93c158ef98-kube-api-access-xtsgk\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: \"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924922 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924939 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f72d166-9d59-443d-9af2-3d93c158ef98-config-data-custom\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: \"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.924975 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f72d166-9d59-443d-9af2-3d93c158ef98-config-data\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: \"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.925008 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c591ea97-2d66-45fe-85e2-1c22c6af8218-config-data-custom\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.925654 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c591ea97-2d66-45fe-85e2-1c22c6af8218-logs\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.926979 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f72d166-9d59-443d-9af2-3d93c158ef98-logs\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: \"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.938947 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f72d166-9d59-443d-9af2-3d93c158ef98-config-data\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: \"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.945944 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f72d166-9d59-443d-9af2-3d93c158ef98-config-data-custom\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: \"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.946886 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f72d166-9d59-443d-9af2-3d93c158ef98-combined-ca-bundle\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: \"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.947953 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c591ea97-2d66-45fe-85e2-1c22c6af8218-combined-ca-bundle\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.951176 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c591ea97-2d66-45fe-85e2-1c22c6af8218-config-data\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.964029 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scjcc\" (UniqueName: \"kubernetes.io/projected/c591ea97-2d66-45fe-85e2-1c22c6af8218-kube-api-access-scjcc\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.981655 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtsgk\" (UniqueName: \"kubernetes.io/projected/8f72d166-9d59-443d-9af2-3d93c158ef98-kube-api-access-xtsgk\") pod \"barbican-worker-86946c9dff-9km2x\" (UID: \"8f72d166-9d59-443d-9af2-3d93c158ef98\") " pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:40 crc kubenswrapper[4772]: I1128 11:25:40.998940 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c591ea97-2d66-45fe-85e2-1c22c6af8218-config-data-custom\") pod \"barbican-keystone-listener-bd5f4f5b6-n56r2\" (UID: \"c591ea97-2d66-45fe-85e2-1c22c6af8218\") " pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.029095 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.029157 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl4ts\" (UniqueName: \"kubernetes.io/projected/dd73003f-bfe8-434c-a400-7d03fbd08d2f-kube-api-access-cl4ts\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.029182 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.029216 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-config-data-custom\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.029248 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.029268 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-config-data\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.029287 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd73003f-bfe8-434c-a400-7d03fbd08d2f-logs\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.029314 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6bmg\" (UniqueName: \"kubernetes.io/projected/f625b657-d0ba-4597-8f08-93e1bca9faf5-kube-api-access-c6bmg\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.029350 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-config\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.029411 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.029460 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-combined-ca-bundle\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.030380 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.030906 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.042271 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.043981 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-config\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.044888 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.058044 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-86946c9dff-9km2x" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.059461 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6bmg\" (UniqueName: \"kubernetes.io/projected/f625b657-d0ba-4597-8f08-93e1bca9faf5-kube-api-access-c6bmg\") pod \"dnsmasq-dns-848cf88cfc-h4cfz\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.084427 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.131865 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-config-data\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.131947 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd73003f-bfe8-434c-a400-7d03fbd08d2f-logs\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.132083 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-combined-ca-bundle\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.132138 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl4ts\" (UniqueName: \"kubernetes.io/projected/dd73003f-bfe8-434c-a400-7d03fbd08d2f-kube-api-access-cl4ts\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.132188 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-config-data-custom\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.133154 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd73003f-bfe8-434c-a400-7d03fbd08d2f-logs\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.140186 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-combined-ca-bundle\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.140551 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-config-data\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.154870 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-config-data-custom\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.157687 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl4ts\" (UniqueName: \"kubernetes.io/projected/dd73003f-bfe8-434c-a400-7d03fbd08d2f-kube-api-access-cl4ts\") pod \"barbican-api-9c587cc74-7vk5q\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.174634 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:41 crc kubenswrapper[4772]: I1128 11:25:41.355970 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.201503 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.213431 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-72xt2" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.288591 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-dns-svc\") pod \"0283d234-d683-44b1-8e41-71fef0c61b16\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.288694 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-ovsdbserver-sb\") pod \"0283d234-d683-44b1-8e41-71fef0c61b16\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.288858 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-etc-machine-id\") pod \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.288958 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-scripts\") pod \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.289041 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-combined-ca-bundle\") pod \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.289088 4772 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-ovsdbserver-nb\") pod \"0283d234-d683-44b1-8e41-71fef0c61b16\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.289142 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-db-sync-config-data\") pod \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.289196 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-config-data\") pod \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.289240 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f9lf\" (UniqueName: \"kubernetes.io/projected/0283d234-d683-44b1-8e41-71fef0c61b16-kube-api-access-5f9lf\") pod \"0283d234-d683-44b1-8e41-71fef0c61b16\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.289302 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwclr\" (UniqueName: \"kubernetes.io/projected/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-kube-api-access-gwclr\") pod \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\" (UID: \"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe\") " Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.289343 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-config\") pod \"0283d234-d683-44b1-8e41-71fef0c61b16\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.289448 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-dns-swift-storage-0\") pod \"0283d234-d683-44b1-8e41-71fef0c61b16\" (UID: \"0283d234-d683-44b1-8e41-71fef0c61b16\") " Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.300762 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0283d234-d683-44b1-8e41-71fef0c61b16-kube-api-access-5f9lf" (OuterVolumeSpecName: "kube-api-access-5f9lf") pod "0283d234-d683-44b1-8e41-71fef0c61b16" (UID: "0283d234-d683-44b1-8e41-71fef0c61b16"). InnerVolumeSpecName "kube-api-access-5f9lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.301114 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f9lf\" (UniqueName: \"kubernetes.io/projected/0283d234-d683-44b1-8e41-71fef0c61b16-kube-api-access-5f9lf\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.314263 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe" (UID: "2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.333793 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe" (UID: "2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.335264 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-kube-api-access-gwclr" (OuterVolumeSpecName: "kube-api-access-gwclr") pod "2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe" (UID: "2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe"). InnerVolumeSpecName "kube-api-access-gwclr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.370794 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-scripts" (OuterVolumeSpecName: "scripts") pod "2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe" (UID: "2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.404561 4772 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.404591 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.404601 4772 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.404614 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwclr\" (UniqueName: \"kubernetes.io/projected/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-kube-api-access-gwclr\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.441950 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe" (UID: "2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.455496 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-config-data" (OuterVolumeSpecName: "config-data") pod "2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe" (UID: "2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.461848 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-config" (OuterVolumeSpecName: "config") pod "0283d234-d683-44b1-8e41-71fef0c61b16" (UID: "0283d234-d683-44b1-8e41-71fef0c61b16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.477927 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0283d234-d683-44b1-8e41-71fef0c61b16" (UID: "0283d234-d683-44b1-8e41-71fef0c61b16"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.488971 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0283d234-d683-44b1-8e41-71fef0c61b16" (UID: "0283d234-d683-44b1-8e41-71fef0c61b16"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.500004 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0283d234-d683-44b1-8e41-71fef0c61b16" (UID: "0283d234-d683-44b1-8e41-71fef0c61b16"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.507025 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.507211 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.507283 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.507342 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.507416 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.507472 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.528875 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0283d234-d683-44b1-8e41-71fef0c61b16" (UID: "0283d234-d683-44b1-8e41-71fef0c61b16"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.609175 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0283d234-d683-44b1-8e41-71fef0c61b16-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:42 crc kubenswrapper[4772]: E1128 11:25:42.712465 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a8f47327-6cc4-4ca2-9363-0eca9d129686" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.900632 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerName="ceilometer-notification-agent" containerID="cri-o://b6e99f5d4c66fece3ff14fcba2bc904f668bfa1de22030db8081c41b707b74fb" gracePeriod=30 Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.900746 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8f47327-6cc4-4ca2-9363-0eca9d129686","Type":"ContainerStarted","Data":"84ed15146652d9a3045c02270ec4c57a8ff32278121bf02137e9f63b69755875"} Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.900810 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.901187 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerName="proxy-httpd" containerID="cri-o://84ed15146652d9a3045c02270ec4c57a8ff32278121bf02137e9f63b69755875" gracePeriod=30 Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.901241 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerName="sg-core" containerID="cri-o://4bcb03b424e57104d98bfa4dc924fa675a63cd3a42ac261d8970a464069ac549" gracePeriod=30 Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.920109 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-86946c9dff-9km2x"] Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.920791 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.921317 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-wnq4f" event={"ID":"0283d234-d683-44b1-8e41-71fef0c61b16","Type":"ContainerDied","Data":"a3a58bfa36d2827b9e458ca8b8984a2966d400c63ae340a2cd9529be4a444124"} Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.921405 4772 scope.go:117] "RemoveContainer" containerID="4b113a5a5fe573e7e8954abf02cea36ea05635597c728e25f732bb8c84072813" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.930629 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-72xt2" event={"ID":"2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe","Type":"ContainerDied","Data":"2f84de32a7ec32e70c44df724483c6b8911aca5556c34e4e59c152f25f8d989e"} Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.930669 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f84de32a7ec32e70c44df724483c6b8911aca5556c34e4e59c152f25f8d989e" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.930757 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-72xt2" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.955318 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-bd5f4f5b6-n56r2"] Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.989654 4772 scope.go:117] "RemoveContainer" containerID="5f39cf82eb73d431608d167b1731cb89f219694dbb8f97e35aed6537d64554f5" Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.991409 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wnq4f"] Nov 28 11:25:42 crc kubenswrapper[4772]: I1128 11:25:42.998972 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wnq4f"] Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.118132 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-h4cfz"] Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.137748 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9c587cc74-7vk5q"] Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.573123 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 11:25:43 crc kubenswrapper[4772]: E1128 11:25:43.574720 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0283d234-d683-44b1-8e41-71fef0c61b16" containerName="init" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.574736 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0283d234-d683-44b1-8e41-71fef0c61b16" containerName="init" Nov 28 11:25:43 crc kubenswrapper[4772]: E1128 11:25:43.574769 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0283d234-d683-44b1-8e41-71fef0c61b16" containerName="dnsmasq-dns" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.574775 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0283d234-d683-44b1-8e41-71fef0c61b16" containerName="dnsmasq-dns" Nov 28 11:25:43 crc kubenswrapper[4772]: E1128 11:25:43.574800 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe" containerName="cinder-db-sync" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.574806 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe" containerName="cinder-db-sync" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.574982 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="0283d234-d683-44b1-8e41-71fef0c61b16" containerName="dnsmasq-dns" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.575010 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe" containerName="cinder-db-sync" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.579190 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.581957 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-swb64" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.583654 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.583948 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.584228 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.613409 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.651632 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-scripts\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.651693 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/529e270c-458a-45cf-bb5e-f2aecfa83b27-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.651804 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.651839 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.651875 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-config-data\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.651892 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-rmvdk\" (UniqueName: \"kubernetes.io/projected/529e270c-458a-45cf-bb5e-f2aecfa83b27-kube-api-access-rmvdk\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.686136 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-h4cfz"] Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.721512 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pv7n6"] Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.723288 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.760052 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.760113 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.760158 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-config-data\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.760178 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmvdk\" (UniqueName: \"kubernetes.io/projected/529e270c-458a-45cf-bb5e-f2aecfa83b27-kube-api-access-rmvdk\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.760226 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-scripts\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.760251 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/529e270c-458a-45cf-bb5e-f2aecfa83b27-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.760340 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/529e270c-458a-45cf-bb5e-f2aecfa83b27-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.798790 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pv7n6"] Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.862792 
4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sf6q\" (UniqueName: \"kubernetes.io/projected/923ff71f-a546-41a6-a825-d01d907c7763-kube-api-access-8sf6q\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.862876 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.862953 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-config\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.862975 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-dns-svc\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.863030 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.863052 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.965722 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sf6q\" (UniqueName: \"kubernetes.io/projected/923ff71f-a546-41a6-a825-d01d907c7763-kube-api-access-8sf6q\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.965809 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.965891 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-config\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" 
Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.965915 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-dns-svc\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.965976 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.966009 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.967009 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.967979 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.969335 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-config\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.969434 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-dns-svc\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.969727 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.975346 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.977435 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-config-data\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.980628 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-scripts\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.980976 4772 generic.go:334] "Generic (PLEG): container finished" podID="f625b657-d0ba-4597-8f08-93e1bca9faf5" containerID="e8af265e22a0444442ab4cf07c32899df96b54d5a86c7096ce5a440d1400eb24" exitCode=0 Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.981014 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" event={"ID":"f625b657-d0ba-4597-8f08-93e1bca9faf5","Type":"ContainerDied","Data":"e8af265e22a0444442ab4cf07c32899df96b54d5a86c7096ce5a440d1400eb24"} Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.984995 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" event={"ID":"f625b657-d0ba-4597-8f08-93e1bca9faf5","Type":"ContainerStarted","Data":"0bbb021d9d97c3e9db58aa3372613d353b7fc351ae74cb2c51810413bcfc4957"} Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.986764 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmvdk\" (UniqueName: \"kubernetes.io/projected/529e270c-458a-45cf-bb5e-f2aecfa83b27-kube-api-access-rmvdk\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:43 crc kubenswrapper[4772]: I1128 11:25:43.998997 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " pod="openstack/cinder-scheduler-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.010483 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sf6q\" (UniqueName: \"kubernetes.io/projected/923ff71f-a546-41a6-a825-d01d907c7763-kube-api-access-8sf6q\") pod \"dnsmasq-dns-6578955fd5-pv7n6\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.041475 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0283d234-d683-44b1-8e41-71fef0c61b16" path="/var/lib/kubelet/pods/0283d234-d683-44b1-8e41-71fef0c61b16/volumes" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.042163 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86946c9dff-9km2x" event={"ID":"8f72d166-9d59-443d-9af2-3d93c158ef98","Type":"ContainerStarted","Data":"b9249ed7f64c14984a774f0a2e35acec906b98a9fe78fcba1f20ef9856d8cf7a"} Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.042200 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.058043 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" 
event={"ID":"c591ea97-2d66-45fe-85e2-1c22c6af8218","Type":"ContainerStarted","Data":"fde7d7661107297e83631c95cbcb2e800196473a80ff8b4a71bd1560e0410d23"} Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.058187 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.062613 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.063150 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.078868 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.144752 4772 generic.go:334] "Generic (PLEG): container finished" podID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerID="84ed15146652d9a3045c02270ec4c57a8ff32278121bf02137e9f63b69755875" exitCode=0 Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.144799 4772 generic.go:334] "Generic (PLEG): container finished" podID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerID="4bcb03b424e57104d98bfa4dc924fa675a63cd3a42ac261d8970a464069ac549" exitCode=2 Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.145050 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8f47327-6cc4-4ca2-9363-0eca9d129686","Type":"ContainerDied","Data":"84ed15146652d9a3045c02270ec4c57a8ff32278121bf02137e9f63b69755875"} Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.145083 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8f47327-6cc4-4ca2-9363-0eca9d129686","Type":"ContainerDied","Data":"4bcb03b424e57104d98bfa4dc924fa675a63cd3a42ac261d8970a464069ac549"} Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.162445 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9c587cc74-7vk5q" event={"ID":"dd73003f-bfe8-434c-a400-7d03fbd08d2f","Type":"ContainerStarted","Data":"471d4641727ce30c311b0515c91ce0acd3b44af77ea44a7e0c2c094f8f400d2e"} Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.162514 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9c587cc74-7vk5q" event={"ID":"dd73003f-bfe8-434c-a400-7d03fbd08d2f","Type":"ContainerStarted","Data":"f4d1a4bdbc4fb47a453363b63cbff0b60b9974b03e1d949ef261407eaad73814"} Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.221326 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.246686 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-config-data-custom\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.247083 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.247107 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3decec05-c99c-4a71-9e7f-d98a8d52e92f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.247177 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3decec05-c99c-4a71-9e7f-d98a8d52e92f-logs\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.247218 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7v8x\" (UniqueName: \"kubernetes.io/projected/3decec05-c99c-4a71-9e7f-d98a8d52e92f-kube-api-access-m7v8x\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.247246 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-scripts\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.247263 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-config-data\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.352624 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7v8x\" (UniqueName: \"kubernetes.io/projected/3decec05-c99c-4a71-9e7f-d98a8d52e92f-kube-api-access-m7v8x\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.352697 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-scripts\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.352736 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-config-data\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.352775 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-config-data-custom\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.352816 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.352836 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3decec05-c99c-4a71-9e7f-d98a8d52e92f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.352898 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3decec05-c99c-4a71-9e7f-d98a8d52e92f-logs\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.353328 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3decec05-c99c-4a71-9e7f-d98a8d52e92f-logs\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.355742 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3decec05-c99c-4a71-9e7f-d98a8d52e92f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.360177 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.360424 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-scripts\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.361530 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-config-data-custom\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.363124 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-config-data\") pod 
\"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.376900 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7v8x\" (UniqueName: \"kubernetes.io/projected/3decec05-c99c-4a71-9e7f-d98a8d52e92f-kube-api-access-m7v8x\") pod \"cinder-api-0\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: E1128 11:25:44.511560 4772 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 28 11:25:44 crc kubenswrapper[4772]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/f625b657-d0ba-4597-8f08-93e1bca9faf5/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 28 11:25:44 crc kubenswrapper[4772]: > podSandboxID="0bbb021d9d97c3e9db58aa3372613d353b7fc351ae74cb2c51810413bcfc4957" Nov 28 11:25:44 crc kubenswrapper[4772]: E1128 11:25:44.511792 4772 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 28 11:25:44 crc kubenswrapper[4772]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n99h8bhd9h696h649h588h5c6h658h5b4h57fh65h89h5f5h56h696h5dh8h57h597h68ch568h58dh66hf4h675h598h588h67dhb5h69h5dh6bq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6bmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-848cf88cfc-h4cfz_openstack(f625b657-d0ba-4597-8f08-93e1bca9faf5): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/f625b657-d0ba-4597-8f08-93e1bca9faf5/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 28 11:25:44 crc kubenswrapper[4772]: > logger="UnhandledError" Nov 28 11:25:44 crc kubenswrapper[4772]: E1128 11:25:44.513586 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/f625b657-d0ba-4597-8f08-93e1bca9faf5/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" podUID="f625b657-d0ba-4597-8f08-93e1bca9faf5" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.637828 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.964266 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pv7n6"] Nov 28 11:25:44 crc kubenswrapper[4772]: I1128 11:25:44.978409 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 11:25:45 crc kubenswrapper[4772]: I1128 11:25:45.064510 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 11:25:45 crc kubenswrapper[4772]: I1128 11:25:45.180200 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3decec05-c99c-4a71-9e7f-d98a8d52e92f","Type":"ContainerStarted","Data":"f85d53dd56302a42f86783b7070856605c65fe9dacfd63ce887254079b3d1777"} Nov 28 11:25:45 crc kubenswrapper[4772]: I1128 11:25:45.182111 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"529e270c-458a-45cf-bb5e-f2aecfa83b27","Type":"ContainerStarted","Data":"b231d574e9fc92ff4b452700f7d3041808207fc722389aab19728211ebf20eb2"} Nov 28 11:25:45 crc kubenswrapper[4772]: I1128 11:25:45.183686 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" event={"ID":"923ff71f-a546-41a6-a825-d01d907c7763","Type":"ContainerStarted","Data":"f791b60d91d14da7405b2bc912fa62ce8934a22816658a71c6fa0f7bee45a335"} Nov 28 11:25:45 crc kubenswrapper[4772]: I1128 11:25:45.186274 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9c587cc74-7vk5q" event={"ID":"dd73003f-bfe8-434c-a400-7d03fbd08d2f","Type":"ContainerStarted","Data":"b93ccb2edc1836e9818c9445fe9e47601947eadf07f7f46f8149c62f9e8c595a"} Nov 28 11:25:45 crc 
kubenswrapper[4772]: I1128 11:25:45.229605 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-9c587cc74-7vk5q" podStartSLOduration=5.229575196 podStartE2EDuration="5.229575196s" podCreationTimestamp="2025-11-28 11:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:45.2178276 +0000 UTC m=+1143.541070847" watchObservedRunningTime="2025-11-28 11:25:45.229575196 +0000 UTC m=+1143.552818423" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.197033 4772 generic.go:334] "Generic (PLEG): container finished" podID="923ff71f-a546-41a6-a825-d01d907c7763" containerID="494dc4d457caf70532bedc2c10a5ca239db3678d8b2c79eb3577d3cd10a7974f" exitCode=0 Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.197184 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" event={"ID":"923ff71f-a546-41a6-a825-d01d907c7763","Type":"ContainerDied","Data":"494dc4d457caf70532bedc2c10a5ca239db3678d8b2c79eb3577d3cd10a7974f"} Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.200158 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3decec05-c99c-4a71-9e7f-d98a8d52e92f","Type":"ContainerStarted","Data":"83963af02c4e34cd4f079d6a6722eb803c9ccde151a78f7b92df7c5a59a68532"} Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.200203 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.200226 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.561882 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.583324 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-config\") pod \"f625b657-d0ba-4597-8f08-93e1bca9faf5\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.583420 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-dns-swift-storage-0\") pod \"f625b657-d0ba-4597-8f08-93e1bca9faf5\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.583470 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-dns-svc\") pod \"f625b657-d0ba-4597-8f08-93e1bca9faf5\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.583497 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6bmg\" (UniqueName: \"kubernetes.io/projected/f625b657-d0ba-4597-8f08-93e1bca9faf5-kube-api-access-c6bmg\") pod \"f625b657-d0ba-4597-8f08-93e1bca9faf5\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.583591 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-ovsdbserver-sb\") pod \"f625b657-d0ba-4597-8f08-93e1bca9faf5\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.583613 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-ovsdbserver-nb\") pod \"f625b657-d0ba-4597-8f08-93e1bca9faf5\" (UID: \"f625b657-d0ba-4597-8f08-93e1bca9faf5\") " Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.612777 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f625b657-d0ba-4597-8f08-93e1bca9faf5-kube-api-access-c6bmg" (OuterVolumeSpecName: "kube-api-access-c6bmg") pod "f625b657-d0ba-4597-8f08-93e1bca9faf5" (UID: "f625b657-d0ba-4597-8f08-93e1bca9faf5"). InnerVolumeSpecName "kube-api-access-c6bmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.661937 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f625b657-d0ba-4597-8f08-93e1bca9faf5" (UID: "f625b657-d0ba-4597-8f08-93e1bca9faf5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.688414 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6bmg\" (UniqueName: \"kubernetes.io/projected/f625b657-d0ba-4597-8f08-93e1bca9faf5-kube-api-access-c6bmg\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.688685 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.692633 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f625b657-d0ba-4597-8f08-93e1bca9faf5" (UID: "f625b657-d0ba-4597-8f08-93e1bca9faf5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.710618 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f625b657-d0ba-4597-8f08-93e1bca9faf5" (UID: "f625b657-d0ba-4597-8f08-93e1bca9faf5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.711517 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-config" (OuterVolumeSpecName: "config") pod "f625b657-d0ba-4597-8f08-93e1bca9faf5" (UID: "f625b657-d0ba-4597-8f08-93e1bca9faf5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.738612 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f625b657-d0ba-4597-8f08-93e1bca9faf5" (UID: "f625b657-d0ba-4597-8f08-93e1bca9faf5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.791611 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.791661 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.791677 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:46 crc kubenswrapper[4772]: I1128 11:25:46.791693 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f625b657-d0ba-4597-8f08-93e1bca9faf5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.224911 4772 generic.go:334] "Generic (PLEG): container finished" podID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerID="b6e99f5d4c66fece3ff14fcba2bc904f668bfa1de22030db8081c41b707b74fb" exitCode=0 Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.224978 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8f47327-6cc4-4ca2-9363-0eca9d129686","Type":"ContainerDied","Data":"b6e99f5d4c66fece3ff14fcba2bc904f668bfa1de22030db8081c41b707b74fb"} Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.230859 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.230861 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-h4cfz" event={"ID":"f625b657-d0ba-4597-8f08-93e1bca9faf5","Type":"ContainerDied","Data":"0bbb021d9d97c3e9db58aa3372613d353b7fc351ae74cb2c51810413bcfc4957"} Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.230960 4772 scope.go:117] "RemoveContainer" containerID="e8af265e22a0444442ab4cf07c32899df96b54d5a86c7096ce5a440d1400eb24" Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.322459 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-h4cfz"] Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.330604 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-h4cfz"] Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.689590 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.818086 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.920919 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8f47327-6cc4-4ca2-9363-0eca9d129686-run-httpd\") pod \"a8f47327-6cc4-4ca2-9363-0eca9d129686\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.921002 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-sg-core-conf-yaml\") pod \"a8f47327-6cc4-4ca2-9363-0eca9d129686\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.921035 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8f47327-6cc4-4ca2-9363-0eca9d129686-log-httpd\") pod \"a8f47327-6cc4-4ca2-9363-0eca9d129686\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.921075 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-scripts\") pod \"a8f47327-6cc4-4ca2-9363-0eca9d129686\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.921134 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-config-data\") pod \"a8f47327-6cc4-4ca2-9363-0eca9d129686\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.921185 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgw9c\" (UniqueName: \"kubernetes.io/projected/a8f47327-6cc4-4ca2-9363-0eca9d129686-kube-api-access-mgw9c\") pod \"a8f47327-6cc4-4ca2-9363-0eca9d129686\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.921208 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-combined-ca-bundle\") pod \"a8f47327-6cc4-4ca2-9363-0eca9d129686\" (UID: \"a8f47327-6cc4-4ca2-9363-0eca9d129686\") " Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.921867 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f47327-6cc4-4ca2-9363-0eca9d129686-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a8f47327-6cc4-4ca2-9363-0eca9d129686" (UID: "a8f47327-6cc4-4ca2-9363-0eca9d129686"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.929657 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f47327-6cc4-4ca2-9363-0eca9d129686-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a8f47327-6cc4-4ca2-9363-0eca9d129686" (UID: "a8f47327-6cc4-4ca2-9363-0eca9d129686"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.930532 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-scripts" (OuterVolumeSpecName: "scripts") pod "a8f47327-6cc4-4ca2-9363-0eca9d129686" (UID: "a8f47327-6cc4-4ca2-9363-0eca9d129686"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:47 crc kubenswrapper[4772]: I1128 11:25:47.938170 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f47327-6cc4-4ca2-9363-0eca9d129686-kube-api-access-mgw9c" (OuterVolumeSpecName: "kube-api-access-mgw9c") pod "a8f47327-6cc4-4ca2-9363-0eca9d129686" (UID: "a8f47327-6cc4-4ca2-9363-0eca9d129686"). InnerVolumeSpecName "kube-api-access-mgw9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.023373 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8f47327-6cc4-4ca2-9363-0eca9d129686-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.023405 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8f47327-6cc4-4ca2-9363-0eca9d129686-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.023416 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.023426 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgw9c\" (UniqueName: \"kubernetes.io/projected/a8f47327-6cc4-4ca2-9363-0eca9d129686-kube-api-access-mgw9c\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.028005 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f625b657-d0ba-4597-8f08-93e1bca9faf5" path="/var/lib/kubelet/pods/f625b657-d0ba-4597-8f08-93e1bca9faf5/volumes" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.102372 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a8f47327-6cc4-4ca2-9363-0eca9d129686" (UID: "a8f47327-6cc4-4ca2-9363-0eca9d129686"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.125081 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.151757 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8f47327-6cc4-4ca2-9363-0eca9d129686" (UID: "a8f47327-6cc4-4ca2-9363-0eca9d129686"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.166683 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-config-data" (OuterVolumeSpecName: "config-data") pod "a8f47327-6cc4-4ca2-9363-0eca9d129686" (UID: "a8f47327-6cc4-4ca2-9363-0eca9d129686"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.227306 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.227347 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f47327-6cc4-4ca2-9363-0eca9d129686-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.262845 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" event={"ID":"923ff71f-a546-41a6-a825-d01d907c7763","Type":"ContainerStarted","Data":"dc28d29af8f70c7cb4b515a47b7688cbd5d1918fd5c4fc8036912a89fbe3c272"} Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.262997 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.271291 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.271325 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8f47327-6cc4-4ca2-9363-0eca9d129686","Type":"ContainerDied","Data":"c118db470904123053baba4b76c5f4d3c0c4eb18f72de50346c9ae99cfd0d670"} Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.271469 4772 scope.go:117] "RemoveContainer" containerID="84ed15146652d9a3045c02270ec4c57a8ff32278121bf02137e9f63b69755875" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.284278 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86946c9dff-9km2x" event={"ID":"8f72d166-9d59-443d-9af2-3d93c158ef98","Type":"ContainerStarted","Data":"d82a6f92bd1ef8537508346765a21967f5ad8bd8f31caeb6c3c6fbd03f18f938"} Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.294957 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" event={"ID":"c591ea97-2d66-45fe-85e2-1c22c6af8218","Type":"ContainerStarted","Data":"026edffe373cb3e840eccc1a3853aee86bb9854d32e8da5b52ad446c3313b82a"} Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.392056 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" podStartSLOduration=5.392031967 podStartE2EDuration="5.392031967s" podCreationTimestamp="2025-11-28 11:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:48.312134666 +0000 UTC m=+1146.635377913" watchObservedRunningTime="2025-11-28 11:25:48.392031967 +0000 UTC m=+1146.715275194" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.400377 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-667fbdd95d-n4zbv"] Nov 28 11:25:48 crc 
kubenswrapper[4772]: E1128 11:25:48.400853 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerName="ceilometer-notification-agent" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.400875 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerName="ceilometer-notification-agent" Nov 28 11:25:48 crc kubenswrapper[4772]: E1128 11:25:48.400886 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerName="sg-core" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.400893 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerName="sg-core" Nov 28 11:25:48 crc kubenswrapper[4772]: E1128 11:25:48.400903 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f625b657-d0ba-4597-8f08-93e1bca9faf5" containerName="init" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.400909 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f625b657-d0ba-4597-8f08-93e1bca9faf5" containerName="init" Nov 28 11:25:48 crc kubenswrapper[4772]: E1128 11:25:48.400920 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerName="proxy-httpd" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.400926 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerName="proxy-httpd" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.401126 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f625b657-d0ba-4597-8f08-93e1bca9faf5" containerName="init" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.401139 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerName="sg-core" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.401154 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerName="proxy-httpd" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.401169 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f47327-6cc4-4ca2-9363-0eca9d129686" containerName="ceilometer-notification-agent" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.402262 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.405308 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.405545 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.418865 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-667fbdd95d-n4zbv"] Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.439870 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-config-data-custom\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.439929 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx24j\" (UniqueName: \"kubernetes.io/projected/863244d7-6e70-4ac3-a7f1-485205de6c8e-kube-api-access-jx24j\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.439964 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-internal-tls-certs\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.439984 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863244d7-6e70-4ac3-a7f1-485205de6c8e-logs\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.440047 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-config-data\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.440128 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-combined-ca-bundle\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.440161 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-public-tls-certs\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: E1128 11:25:48.493313 4772 fsHandler.go:119] failed to collect 
filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1d1ba065fb1ea7bfe45a2199ddac461b3346a232b61b2b96dbdfc893d3188907/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1d1ba065fb1ea7bfe45a2199ddac461b3346a232b61b2b96dbdfc893d3188907/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_a8f47327-6cc4-4ca2-9363-0eca9d129686/proxy-httpd/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_a8f47327-6cc4-4ca2-9363-0eca9d129686/proxy-httpd/0.log: no such file or directory Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.516870 4772 scope.go:117] "RemoveContainer" containerID="4bcb03b424e57104d98bfa4dc924fa675a63cd3a42ac261d8970a464069ac549" Nov 28 11:25:48 crc kubenswrapper[4772]: W1128 11:25:48.523033 4772 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf625b657_d0ba_4597_8f08_93e1bca9faf5.slice/crio-conmon-27df4ff99c8eb5f01dade35ee07839519c1e58cb41aac0db4d4e95260f987049.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf625b657_d0ba_4597_8f08_93e1bca9faf5.slice/crio-conmon-27df4ff99c8eb5f01dade35ee07839519c1e58cb41aac0db4d4e95260f987049.scope: no such file or directory Nov 28 11:25:48 crc kubenswrapper[4772]: W1128 11:25:48.523094 4772 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf625b657_d0ba_4597_8f08_93e1bca9faf5.slice/crio-27df4ff99c8eb5f01dade35ee07839519c1e58cb41aac0db4d4e95260f987049.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf625b657_d0ba_4597_8f08_93e1bca9faf5.slice/crio-27df4ff99c8eb5f01dade35ee07839519c1e58cb41aac0db4d4e95260f987049.scope: no such file or directory Nov 28 11:25:48 crc kubenswrapper[4772]: W1128 11:25:48.532807 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf625b657_d0ba_4597_8f08_93e1bca9faf5.slice/crio-e8af265e22a0444442ab4cf07c32899df96b54d5a86c7096ce5a440d1400eb24.scope WatchSource:0}: Error finding container e8af265e22a0444442ab4cf07c32899df96b54d5a86c7096ce5a440d1400eb24: Status 404 returned error can't find the container with id e8af265e22a0444442ab4cf07c32899df96b54d5a86c7096ce5a440d1400eb24 Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.541400 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-combined-ca-bundle\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.541551 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-public-tls-certs\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.541617 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-config-data-custom\") 
pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.541649 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx24j\" (UniqueName: \"kubernetes.io/projected/863244d7-6e70-4ac3-a7f1-485205de6c8e-kube-api-access-jx24j\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.541682 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-internal-tls-certs\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.541704 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863244d7-6e70-4ac3-a7f1-485205de6c8e-logs\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.541759 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-config-data\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.556582 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863244d7-6e70-4ac3-a7f1-485205de6c8e-logs\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.569043 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-public-tls-certs\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.570720 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-config-data\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.575389 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-config-data-custom\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.578930 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-internal-tls-certs\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " 
pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.579912 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863244d7-6e70-4ac3-a7f1-485205de6c8e-combined-ca-bundle\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.580629 4772 scope.go:117] "RemoveContainer" containerID="b6e99f5d4c66fece3ff14fcba2bc904f668bfa1de22030db8081c41b707b74fb" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.583340 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx24j\" (UniqueName: \"kubernetes.io/projected/863244d7-6e70-4ac3-a7f1-485205de6c8e-kube-api-access-jx24j\") pod \"barbican-api-667fbdd95d-n4zbv\" (UID: \"863244d7-6e70-4ac3-a7f1-485205de6c8e\") " pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.610003 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.639436 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.651075 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.653962 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.657806 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.658024 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.667163 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.675768 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.746673 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-config-data\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.746750 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdmzx\" (UniqueName: \"kubernetes.io/projected/eee1b6b4-d945-405f-b39a-f1350090bd80-kube-api-access-cdmzx\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.746836 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-scripts\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.746918 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eee1b6b4-d945-405f-b39a-f1350090bd80-run-httpd\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.746943 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.746985 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.747773 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eee1b6b4-d945-405f-b39a-f1350090bd80-log-httpd\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.853667 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-config-data\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.854064 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdmzx\" (UniqueName: \"kubernetes.io/projected/eee1b6b4-d945-405f-b39a-f1350090bd80-kube-api-access-cdmzx\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.854105 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-scripts\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.854159 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eee1b6b4-d945-405f-b39a-f1350090bd80-run-httpd\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.854183 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.854225 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.854248 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eee1b6b4-d945-405f-b39a-f1350090bd80-log-httpd\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.854784 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eee1b6b4-d945-405f-b39a-f1350090bd80-log-httpd\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.855261 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eee1b6b4-d945-405f-b39a-f1350090bd80-run-httpd\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.859408 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-config-data\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.866063 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.868625 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-scripts\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.877066 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:48 crc kubenswrapper[4772]: I1128 11:25:48.882615 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdmzx\" (UniqueName: \"kubernetes.io/projected/eee1b6b4-d945-405f-b39a-f1350090bd80-kube-api-access-cdmzx\") pod \"ceilometer-0\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " pod="openstack/ceilometer-0" Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.016192 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.450680 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86946c9dff-9km2x" event={"ID":"8f72d166-9d59-443d-9af2-3d93c158ef98","Type":"ContainerStarted","Data":"d14b186d7770bfc8f8293f2240a8085935a512ad6ec9384d1a68023da3eacb51"} Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.451200 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-667fbdd95d-n4zbv"] Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.479622 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" event={"ID":"c591ea97-2d66-45fe-85e2-1c22c6af8218","Type":"ContainerStarted","Data":"c6420f3a4c1630546c1fb6aa2c6bfae349f1c424f732c2ae023f8316d8366f68"} Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.490646 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"529e270c-458a-45cf-bb5e-f2aecfa83b27","Type":"ContainerStarted","Data":"0effa7f2118567a9fe1f4e461ef0ac8c4b63a48e0701f358ff39fe260776ff23"} Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.531110 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-86946c9dff-9km2x" podStartSLOduration=5.138071721 podStartE2EDuration="9.531081508s" podCreationTimestamp="2025-11-28 11:25:40 +0000 UTC" firstStartedPulling="2025-11-28 11:25:42.940896176 +0000 UTC m=+1141.264139403" lastFinishedPulling="2025-11-28 11:25:47.333905963 +0000 UTC m=+1145.657149190" observedRunningTime="2025-11-28 11:25:49.484938336 +0000 UTC m=+1147.808181563" watchObservedRunningTime="2025-11-28 11:25:49.531081508 +0000 UTC m=+1147.854324735" Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.555588 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-bd5f4f5b6-n56r2" podStartSLOduration=5.154425618 podStartE2EDuration="9.555562056s" podCreationTimestamp="2025-11-28 11:25:40 +0000 UTC" firstStartedPulling="2025-11-28 11:25:42.996432693 +0000 UTC m=+1141.319675920" lastFinishedPulling="2025-11-28 11:25:47.397569131 +0000 UTC m=+1145.720812358" observedRunningTime="2025-11-28 11:25:49.547693101 +0000 UTC m=+1147.870936328" watchObservedRunningTime="2025-11-28 11:25:49.555562056 +0000 UTC m=+1147.878805283" Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.588339 4772 generic.go:334] "Generic (PLEG): container finished" podID="d7a16df0-9676-4d80-adc5-305fe795deb7" containerID="365a66e205b855db7d91d2c5985b9173a8efac7f9192c3919b01914f0b4d98cc" exitCode=137 Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.588404 4772 generic.go:334] "Generic (PLEG): container finished" podID="d7a16df0-9676-4d80-adc5-305fe795deb7" 
containerID="3b112d920152df95b59eecdfa5294a9cd6eb81355d916e94bc8804261cf839e7" exitCode=137 Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.588479 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75774664c5-v7rms" event={"ID":"d7a16df0-9676-4d80-adc5-305fe795deb7","Type":"ContainerDied","Data":"365a66e205b855db7d91d2c5985b9173a8efac7f9192c3919b01914f0b4d98cc"} Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.588514 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75774664c5-v7rms" event={"ID":"d7a16df0-9676-4d80-adc5-305fe795deb7","Type":"ContainerDied","Data":"3b112d920152df95b59eecdfa5294a9cd6eb81355d916e94bc8804261cf839e7"} Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.602132 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3decec05-c99c-4a71-9e7f-d98a8d52e92f" containerName="cinder-api-log" containerID="cri-o://83963af02c4e34cd4f079d6a6722eb803c9ccde151a78f7b92df7c5a59a68532" gracePeriod=30 Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.602514 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3decec05-c99c-4a71-9e7f-d98a8d52e92f","Type":"ContainerStarted","Data":"43099c8870b8eec5c4680bbaf07827212b7d0da8aa8c0eeb9a44ffa5aef63ee7"} Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.602563 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.602885 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3decec05-c99c-4a71-9e7f-d98a8d52e92f" containerName="cinder-api" containerID="cri-o://43099c8870b8eec5c4680bbaf07827212b7d0da8aa8c0eeb9a44ffa5aef63ee7" gracePeriod=30 Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.666885 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.666859425 podStartE2EDuration="6.666859425s" podCreationTimestamp="2025-11-28 11:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:49.640890409 +0000 UTC m=+1147.964133646" watchObservedRunningTime="2025-11-28 11:25:49.666859425 +0000 UTC m=+1147.990102652" Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.811485 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.855276 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.903132 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.918029 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7a16df0-9676-4d80-adc5-305fe795deb7-scripts\") pod \"d7a16df0-9676-4d80-adc5-305fe795deb7\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.918281 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rblm\" (UniqueName: \"kubernetes.io/projected/d7a16df0-9676-4d80-adc5-305fe795deb7-kube-api-access-4rblm\") pod \"d7a16df0-9676-4d80-adc5-305fe795deb7\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.918322 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7a16df0-9676-4d80-adc5-305fe795deb7-horizon-secret-key\") pod \"d7a16df0-9676-4d80-adc5-305fe795deb7\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.918426 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7a16df0-9676-4d80-adc5-305fe795deb7-logs\") pod \"d7a16df0-9676-4d80-adc5-305fe795deb7\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.918491 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7a16df0-9676-4d80-adc5-305fe795deb7-config-data\") pod \"d7a16df0-9676-4d80-adc5-305fe795deb7\" (UID: \"d7a16df0-9676-4d80-adc5-305fe795deb7\") " Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.921179 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7a16df0-9676-4d80-adc5-305fe795deb7-logs" (OuterVolumeSpecName: "logs") pod "d7a16df0-9676-4d80-adc5-305fe795deb7" (UID: "d7a16df0-9676-4d80-adc5-305fe795deb7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.929764 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a16df0-9676-4d80-adc5-305fe795deb7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d7a16df0-9676-4d80-adc5-305fe795deb7" (UID: "d7a16df0-9676-4d80-adc5-305fe795deb7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.936812 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a16df0-9676-4d80-adc5-305fe795deb7-kube-api-access-4rblm" (OuterVolumeSpecName: "kube-api-access-4rblm") pod "d7a16df0-9676-4d80-adc5-305fe795deb7" (UID: "d7a16df0-9676-4d80-adc5-305fe795deb7"). InnerVolumeSpecName "kube-api-access-4rblm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:49 crc kubenswrapper[4772]: I1128 11:25:49.967928 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a16df0-9676-4d80-adc5-305fe795deb7-scripts" (OuterVolumeSpecName: "scripts") pod "d7a16df0-9676-4d80-adc5-305fe795deb7" (UID: "d7a16df0-9676-4d80-adc5-305fe795deb7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.014667 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a16df0-9676-4d80-adc5-305fe795deb7-config-data" (OuterVolumeSpecName: "config-data") pod "d7a16df0-9676-4d80-adc5-305fe795deb7" (UID: "d7a16df0-9676-4d80-adc5-305fe795deb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.021508 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7a16df0-9676-4d80-adc5-305fe795deb7-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.021542 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7a16df0-9676-4d80-adc5-305fe795deb7-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.021555 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rblm\" (UniqueName: \"kubernetes.io/projected/d7a16df0-9676-4d80-adc5-305fe795deb7-kube-api-access-4rblm\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.021566 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7a16df0-9676-4d80-adc5-305fe795deb7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.021577 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7a16df0-9676-4d80-adc5-305fe795deb7-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.024737 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f47327-6cc4-4ca2-9363-0eca9d129686" path="/var/lib/kubelet/pods/a8f47327-6cc4-4ca2-9363-0eca9d129686/volumes" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.353390 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.652624 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75774664c5-v7rms" event={"ID":"d7a16df0-9676-4d80-adc5-305fe795deb7","Type":"ContainerDied","Data":"707f3902e9585e0e9fc75d4a0d7bfaed13fec2a062a647d066ac7b28079c2247"} Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.653157 4772 scope.go:117] "RemoveContainer" containerID="365a66e205b855db7d91d2c5985b9173a8efac7f9192c3919b01914f0b4d98cc" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.653430 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75774664c5-v7rms" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.686038 4772 generic.go:334] "Generic (PLEG): container finished" podID="3decec05-c99c-4a71-9e7f-d98a8d52e92f" containerID="43099c8870b8eec5c4680bbaf07827212b7d0da8aa8c0eeb9a44ffa5aef63ee7" exitCode=0 Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.686077 4772 generic.go:334] "Generic (PLEG): container finished" podID="3decec05-c99c-4a71-9e7f-d98a8d52e92f" containerID="83963af02c4e34cd4f079d6a6722eb803c9ccde151a78f7b92df7c5a59a68532" exitCode=143 Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.686131 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3decec05-c99c-4a71-9e7f-d98a8d52e92f","Type":"ContainerDied","Data":"43099c8870b8eec5c4680bbaf07827212b7d0da8aa8c0eeb9a44ffa5aef63ee7"} Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.686165 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3decec05-c99c-4a71-9e7f-d98a8d52e92f","Type":"ContainerDied","Data":"83963af02c4e34cd4f079d6a6722eb803c9ccde151a78f7b92df7c5a59a68532"} Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.697711 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"529e270c-458a-45cf-bb5e-f2aecfa83b27","Type":"ContainerStarted","Data":"26b9cdd0335530b93f409ca9276bdde27359129391ab7f466805ca1c260b2aa9"} Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.718312 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75774664c5-v7rms"] Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.735070 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75774664c5-v7rms"] Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.736997 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eee1b6b4-d945-405f-b39a-f1350090bd80","Type":"ContainerStarted","Data":"88140720669bbd08e183a234a6b1b90cf9e16eae64e22e504d58eaf68954bb72"} Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.747885 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.338558754 podStartE2EDuration="7.747856205s" podCreationTimestamp="2025-11-28 11:25:43 +0000 UTC" firstStartedPulling="2025-11-28 11:25:44.990976661 +0000 UTC m=+1143.314219878" lastFinishedPulling="2025-11-28 11:25:47.400274102 +0000 UTC m=+1145.723517329" observedRunningTime="2025-11-28 11:25:50.737157666 +0000 UTC m=+1149.060400893" watchObservedRunningTime="2025-11-28 11:25:50.747856205 +0000 UTC m=+1149.071099432" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.759291 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-667fbdd95d-n4zbv" event={"ID":"863244d7-6e70-4ac3-a7f1-485205de6c8e","Type":"ContainerStarted","Data":"3eb4db174bdaa555cf2e3b999410e992c5b839c1d3fc3c80caea2d05af5beec0"} Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.759335 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-667fbdd95d-n4zbv" event={"ID":"863244d7-6e70-4ac3-a7f1-485205de6c8e","Type":"ContainerStarted","Data":"8ded2b87b460ef36ca5a8fd01549747f0665cd45cb08cf4b6afb57581274d1b0"} Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.759347 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-667fbdd95d-n4zbv" 
event={"ID":"863244d7-6e70-4ac3-a7f1-485205de6c8e","Type":"ContainerStarted","Data":"03e3b855d87e731b160120df11c9ddf910cef01b9f511e897ddc57dc8d5f1658"} Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.760067 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.760089 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.788808 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-667fbdd95d-n4zbv" podStartSLOduration=2.788786271 podStartE2EDuration="2.788786271s" podCreationTimestamp="2025-11-28 11:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:50.780528356 +0000 UTC m=+1149.103771583" watchObservedRunningTime="2025-11-28 11:25:50.788786271 +0000 UTC m=+1149.112029498" Nov 28 11:25:50 crc kubenswrapper[4772]: I1128 11:25:50.961316 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-95fd65484-v9p98" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.339737 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.526382 4772 scope.go:117] "RemoveContainer" containerID="3b112d920152df95b59eecdfa5294a9cd6eb81355d916e94bc8804261cf839e7" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.741125 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.783998 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.785024 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3decec05-c99c-4a71-9e7f-d98a8d52e92f","Type":"ContainerDied","Data":"f85d53dd56302a42f86783b7070856605c65fe9dacfd63ce887254079b3d1777"} Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.785069 4772 scope.go:117] "RemoveContainer" containerID="43099c8870b8eec5c4680bbaf07827212b7d0da8aa8c0eeb9a44ffa5aef63ee7" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.792130 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7v8x\" (UniqueName: \"kubernetes.io/projected/3decec05-c99c-4a71-9e7f-d98a8d52e92f-kube-api-access-m7v8x\") pod \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.792216 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-scripts\") pod \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.792296 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-config-data-custom\") pod \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.792352 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3decec05-c99c-4a71-9e7f-d98a8d52e92f-etc-machine-id\") pod \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.792420 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-config-data\") pod \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.792514 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-combined-ca-bundle\") pod \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.792664 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3decec05-c99c-4a71-9e7f-d98a8d52e92f-logs\") pod \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\" (UID: \"3decec05-c99c-4a71-9e7f-d98a8d52e92f\") " Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.794428 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3decec05-c99c-4a71-9e7f-d98a8d52e92f-logs" (OuterVolumeSpecName: "logs") pod "3decec05-c99c-4a71-9e7f-d98a8d52e92f" (UID: "3decec05-c99c-4a71-9e7f-d98a8d52e92f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.795546 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3decec05-c99c-4a71-9e7f-d98a8d52e92f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3decec05-c99c-4a71-9e7f-d98a8d52e92f" (UID: "3decec05-c99c-4a71-9e7f-d98a8d52e92f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.810753 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3decec05-c99c-4a71-9e7f-d98a8d52e92f" (UID: "3decec05-c99c-4a71-9e7f-d98a8d52e92f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.819609 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3decec05-c99c-4a71-9e7f-d98a8d52e92f-kube-api-access-m7v8x" (OuterVolumeSpecName: "kube-api-access-m7v8x") pod "3decec05-c99c-4a71-9e7f-d98a8d52e92f" (UID: "3decec05-c99c-4a71-9e7f-d98a8d52e92f"). InnerVolumeSpecName "kube-api-access-m7v8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.835592 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-scripts" (OuterVolumeSpecName: "scripts") pod "3decec05-c99c-4a71-9e7f-d98a8d52e92f" (UID: "3decec05-c99c-4a71-9e7f-d98a8d52e92f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.898398 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3decec05-c99c-4a71-9e7f-d98a8d52e92f-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.898449 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7v8x\" (UniqueName: \"kubernetes.io/projected/3decec05-c99c-4a71-9e7f-d98a8d52e92f-kube-api-access-m7v8x\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.898464 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.898480 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:51 crc kubenswrapper[4772]: I1128 11:25:51.898493 4772 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3decec05-c99c-4a71-9e7f-d98a8d52e92f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.011577 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a16df0-9676-4d80-adc5-305fe795deb7" path="/var/lib/kubelet/pods/d7a16df0-9676-4d80-adc5-305fe795deb7/volumes" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.059221 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3decec05-c99c-4a71-9e7f-d98a8d52e92f" (UID: "3decec05-c99c-4a71-9e7f-d98a8d52e92f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.097557 4772 scope.go:117] "RemoveContainer" containerID="83963af02c4e34cd4f079d6a6722eb803c9ccde151a78f7b92df7c5a59a68532" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.102976 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.136683 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-config-data" (OuterVolumeSpecName: "config-data") pod "3decec05-c99c-4a71-9e7f-d98a8d52e92f" (UID: "3decec05-c99c-4a71-9e7f-d98a8d52e92f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.205788 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3decec05-c99c-4a71-9e7f-d98a8d52e92f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.422891 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.434591 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.452905 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 11:25:52 crc kubenswrapper[4772]: E1128 11:25:52.453428 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3decec05-c99c-4a71-9e7f-d98a8d52e92f" containerName="cinder-api" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.453453 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3decec05-c99c-4a71-9e7f-d98a8d52e92f" containerName="cinder-api" Nov 28 11:25:52 crc kubenswrapper[4772]: E1128 11:25:52.453467 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a16df0-9676-4d80-adc5-305fe795deb7" containerName="horizon" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.453474 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a16df0-9676-4d80-adc5-305fe795deb7" containerName="horizon" Nov 28 11:25:52 crc kubenswrapper[4772]: E1128 11:25:52.453487 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3decec05-c99c-4a71-9e7f-d98a8d52e92f" containerName="cinder-api-log" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.453495 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3decec05-c99c-4a71-9e7f-d98a8d52e92f" containerName="cinder-api-log" Nov 28 11:25:52 crc kubenswrapper[4772]: E1128 11:25:52.453507 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a16df0-9676-4d80-adc5-305fe795deb7" containerName="horizon-log" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.453513 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a16df0-9676-4d80-adc5-305fe795deb7" containerName="horizon-log" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.453731 4772 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="3decec05-c99c-4a71-9e7f-d98a8d52e92f" containerName="cinder-api" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.453761 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3decec05-c99c-4a71-9e7f-d98a8d52e92f" containerName="cinder-api-log" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.453771 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a16df0-9676-4d80-adc5-305fe795deb7" containerName="horizon" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.453822 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a16df0-9676-4d80-adc5-305fe795deb7" containerName="horizon-log" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.455379 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.457998 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.458261 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.458464 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.474139 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.512067 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.512127 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-scripts\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.512155 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.512182 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-public-tls-certs\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.512321 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16a2193c-fc2c-489d-9aff-edfe826fdb75-etc-machine-id\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.512662 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-52d6b\" (UniqueName: \"kubernetes.io/projected/16a2193c-fc2c-489d-9aff-edfe826fdb75-kube-api-access-52d6b\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.512755 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-config-data\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.513061 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-config-data-custom\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.513399 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16a2193c-fc2c-489d-9aff-edfe826fdb75-logs\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.615916 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.615967 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-scripts\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.615993 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.616023 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-public-tls-certs\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.616059 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16a2193c-fc2c-489d-9aff-edfe826fdb75-etc-machine-id\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.616124 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52d6b\" (UniqueName: \"kubernetes.io/projected/16a2193c-fc2c-489d-9aff-edfe826fdb75-kube-api-access-52d6b\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.616157 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-config-data\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.616197 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-config-data-custom\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.616225 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16a2193c-fc2c-489d-9aff-edfe826fdb75-logs\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.616486 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16a2193c-fc2c-489d-9aff-edfe826fdb75-etc-machine-id\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.617456 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16a2193c-fc2c-489d-9aff-edfe826fdb75-logs\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.626522 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.628927 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-public-tls-certs\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.631327 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-config-data-custom\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.633277 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-config-data\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.637530 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.639666 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-52d6b\" (UniqueName: \"kubernetes.io/projected/16a2193c-fc2c-489d-9aff-edfe826fdb75-kube-api-access-52d6b\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.654154 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16a2193c-fc2c-489d-9aff-edfe826fdb75-scripts\") pod \"cinder-api-0\" (UID: \"16a2193c-fc2c-489d-9aff-edfe826fdb75\") " pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.712129 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6f99664784-xpqjq" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.779691 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68fd445458-gvkkx"] Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.779961 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68fd445458-gvkkx" podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerName="horizon-log" containerID="cri-o://13f17c4a0caab514585c5b369eeb1cd0a46f5cf4d06ccba0ade7be0f4ee6672a" gracePeriod=30 Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.780479 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68fd445458-gvkkx" podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerName="horizon" containerID="cri-o://8c298779730781bcf03dff9dee7b7d6a124a88f3438b03f6e35dab6e372388d5" gracePeriod=30 Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.786281 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.787860 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68fd445458-gvkkx" podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Nov 28 11:25:52 crc kubenswrapper[4772]: I1128 11:25:52.827028 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eee1b6b4-d945-405f-b39a-f1350090bd80","Type":"ContainerStarted","Data":"7c232d42484490af6f5fb2347d76eba27dd1b8f17846488a1df3b805b4438fc1"} Nov 28 11:25:53 crc kubenswrapper[4772]: I1128 11:25:53.065154 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-758875cc6f-fmsqk" Nov 28 11:25:53 crc kubenswrapper[4772]: I1128 11:25:53.148630 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-95fd65484-v9p98"] Nov 28 11:25:53 crc kubenswrapper[4772]: I1128 11:25:53.149342 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-95fd65484-v9p98" podUID="3cb60a74-40d5-4c23-85ea-a5256ff13988" containerName="neutron-httpd" containerID="cri-o://10dafc5ee45eeee82aabdba12b565d2afd6f4f07ee51479d3dbabd2919963abb" gracePeriod=30 Nov 28 11:25:53 crc kubenswrapper[4772]: I1128 11:25:53.148926 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-95fd65484-v9p98" podUID="3cb60a74-40d5-4c23-85ea-a5256ff13988" containerName="neutron-api" containerID="cri-o://f7fa60ab485700246fa8a955b4aec3806c70a46c74a47c0ecb93756a90fa078b" gracePeriod=30 Nov 28 11:25:53 crc kubenswrapper[4772]: I1128 11:25:53.421013 4772 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 11:25:53 crc kubenswrapper[4772]: W1128 11:25:53.431571 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16a2193c_fc2c_489d_9aff_edfe826fdb75.slice/crio-2aad301e947b02c95cadb08dddb8987c95316874216b2a8c3478f99c0f6b3f9a WatchSource:0}: Error finding container 2aad301e947b02c95cadb08dddb8987c95316874216b2a8c3478f99c0f6b3f9a: Status 404 returned error can't find the container with id 2aad301e947b02c95cadb08dddb8987c95316874216b2a8c3478f99c0f6b3f9a Nov 28 11:25:53 crc kubenswrapper[4772]: I1128 11:25:53.776309 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:25:53 crc kubenswrapper[4772]: I1128 11:25:53.930913 4772 generic.go:334] "Generic (PLEG): container finished" podID="3cb60a74-40d5-4c23-85ea-a5256ff13988" containerID="10dafc5ee45eeee82aabdba12b565d2afd6f4f07ee51479d3dbabd2919963abb" exitCode=0 Nov 28 11:25:53 crc kubenswrapper[4772]: I1128 11:25:53.931056 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95fd65484-v9p98" event={"ID":"3cb60a74-40d5-4c23-85ea-a5256ff13988","Type":"ContainerDied","Data":"10dafc5ee45eeee82aabdba12b565d2afd6f4f07ee51479d3dbabd2919963abb"} Nov 28 11:25:53 crc kubenswrapper[4772]: I1128 11:25:53.938136 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"16a2193c-fc2c-489d-9aff-edfe826fdb75","Type":"ContainerStarted","Data":"2aad301e947b02c95cadb08dddb8987c95316874216b2a8c3478f99c0f6b3f9a"} Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.012935 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3decec05-c99c-4a71-9e7f-d98a8d52e92f" path="/var/lib/kubelet/pods/3decec05-c99c-4a71-9e7f-d98a8d52e92f/volumes" Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.069561 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.186142 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-4fv6l"] Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.193599 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" podUID="52af4d7c-4347-47f2-8394-aeb9a51ae52f" containerName="dnsmasq-dns" containerID="cri-o://d7b57c06bb6e105e9397deee90b325461899f607161492e9292e0774e001cad4" gracePeriod=10 Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.224847 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.229520 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="529e270c-458a-45cf-bb5e-f2aecfa83b27" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.161:8080/\": dial tcp 10.217.0.161:8080: connect: connection refused" Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.820658 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.888204 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-ovsdbserver-sb\") pod \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.888819 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-dns-swift-storage-0\") pod \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.889018 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-config\") pod \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.889212 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-ovsdbserver-nb\") pod \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.889330 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjxgg\" (UniqueName: \"kubernetes.io/projected/52af4d7c-4347-47f2-8394-aeb9a51ae52f-kube-api-access-hjxgg\") pod \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.889478 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-dns-svc\") pod \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\" (UID: \"52af4d7c-4347-47f2-8394-aeb9a51ae52f\") " Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.926016 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52af4d7c-4347-47f2-8394-aeb9a51ae52f-kube-api-access-hjxgg" (OuterVolumeSpecName: "kube-api-access-hjxgg") pod "52af4d7c-4347-47f2-8394-aeb9a51ae52f" (UID: "52af4d7c-4347-47f2-8394-aeb9a51ae52f"). InnerVolumeSpecName "kube-api-access-hjxgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.961246 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "52af4d7c-4347-47f2-8394-aeb9a51ae52f" (UID: "52af4d7c-4347-47f2-8394-aeb9a51ae52f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.972652 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"16a2193c-fc2c-489d-9aff-edfe826fdb75","Type":"ContainerStarted","Data":"4a0c4f742fb34439ec59c95ef673aaa792f47b74f887d2e15f778707e670ac8f"} Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.988072 4772 generic.go:334] "Generic (PLEG): container finished" podID="52af4d7c-4347-47f2-8394-aeb9a51ae52f" containerID="d7b57c06bb6e105e9397deee90b325461899f607161492e9292e0774e001cad4" exitCode=0 Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.988133 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" event={"ID":"52af4d7c-4347-47f2-8394-aeb9a51ae52f","Type":"ContainerDied","Data":"d7b57c06bb6e105e9397deee90b325461899f607161492e9292e0774e001cad4"} Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.988171 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" event={"ID":"52af4d7c-4347-47f2-8394-aeb9a51ae52f","Type":"ContainerDied","Data":"412f8d68464f66aed72ae7dba1080c5d0bde45bd102d561e6120b7ff81bca3f8"} Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.988192 4772 scope.go:117] "RemoveContainer" containerID="d7b57c06bb6e105e9397deee90b325461899f607161492e9292e0774e001cad4" Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.988338 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-4fv6l" Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.994026 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:54 crc kubenswrapper[4772]: I1128 11:25:54.994613 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjxgg\" (UniqueName: \"kubernetes.io/projected/52af4d7c-4347-47f2-8394-aeb9a51ae52f-kube-api-access-hjxgg\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.000479 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52af4d7c-4347-47f2-8394-aeb9a51ae52f" (UID: "52af4d7c-4347-47f2-8394-aeb9a51ae52f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.003015 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52af4d7c-4347-47f2-8394-aeb9a51ae52f" (UID: "52af4d7c-4347-47f2-8394-aeb9a51ae52f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.021819 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52af4d7c-4347-47f2-8394-aeb9a51ae52f" (UID: "52af4d7c-4347-47f2-8394-aeb9a51ae52f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.032267 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-config" (OuterVolumeSpecName: "config") pod "52af4d7c-4347-47f2-8394-aeb9a51ae52f" (UID: "52af4d7c-4347-47f2-8394-aeb9a51ae52f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.098581 4772 scope.go:117] "RemoveContainer" containerID="1327b463916d37e0e51b8b204627f2bff357603e78789782a59e0c2c023b6881" Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.100802 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.100916 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.101001 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.101172 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af4d7c-4347-47f2-8394-aeb9a51ae52f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.139345 4772 scope.go:117] "RemoveContainer" containerID="d7b57c06bb6e105e9397deee90b325461899f607161492e9292e0774e001cad4" Nov 28 11:25:55 crc kubenswrapper[4772]: E1128 11:25:55.140641 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b57c06bb6e105e9397deee90b325461899f607161492e9292e0774e001cad4\": container with ID starting with d7b57c06bb6e105e9397deee90b325461899f607161492e9292e0774e001cad4 not found: ID does not exist" containerID="d7b57c06bb6e105e9397deee90b325461899f607161492e9292e0774e001cad4" Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.140709 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b57c06bb6e105e9397deee90b325461899f607161492e9292e0774e001cad4"} err="failed to get container status \"d7b57c06bb6e105e9397deee90b325461899f607161492e9292e0774e001cad4\": rpc error: code = NotFound desc = could not find container \"d7b57c06bb6e105e9397deee90b325461899f607161492e9292e0774e001cad4\": container with ID starting with d7b57c06bb6e105e9397deee90b325461899f607161492e9292e0774e001cad4 not found: ID does not exist" Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.140746 4772 scope.go:117] "RemoveContainer" containerID="1327b463916d37e0e51b8b204627f2bff357603e78789782a59e0c2c023b6881" Nov 28 11:25:55 crc kubenswrapper[4772]: E1128 11:25:55.142218 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1327b463916d37e0e51b8b204627f2bff357603e78789782a59e0c2c023b6881\": container with ID starting with 1327b463916d37e0e51b8b204627f2bff357603e78789782a59e0c2c023b6881 not found: ID does not exist" 
containerID="1327b463916d37e0e51b8b204627f2bff357603e78789782a59e0c2c023b6881" Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.142286 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1327b463916d37e0e51b8b204627f2bff357603e78789782a59e0c2c023b6881"} err="failed to get container status \"1327b463916d37e0e51b8b204627f2bff357603e78789782a59e0c2c023b6881\": rpc error: code = NotFound desc = could not find container \"1327b463916d37e0e51b8b204627f2bff357603e78789782a59e0c2c023b6881\": container with ID starting with 1327b463916d37e0e51b8b204627f2bff357603e78789782a59e0c2c023b6881 not found: ID does not exist" Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.334597 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-4fv6l"] Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.345965 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-4fv6l"] Nov 28 11:25:55 crc kubenswrapper[4772]: I1128 11:25:55.943968 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68fd445458-gvkkx" podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:60420->10.217.0.147:8443: read: connection reset by peer" Nov 28 11:25:56 crc kubenswrapper[4772]: I1128 11:25:56.012112 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52af4d7c-4347-47f2-8394-aeb9a51ae52f" path="/var/lib/kubelet/pods/52af4d7c-4347-47f2-8394-aeb9a51ae52f/volumes" Nov 28 11:25:56 crc kubenswrapper[4772]: I1128 11:25:56.013201 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 11:25:56 crc kubenswrapper[4772]: I1128 11:25:56.013228 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"16a2193c-fc2c-489d-9aff-edfe826fdb75","Type":"ContainerStarted","Data":"6737a6a49cdddc264a3135a47efed7ae019d14f2577c127f0e016a23fc9a385b"} Nov 28 11:25:56 crc kubenswrapper[4772]: I1128 11:25:56.071652 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.0716265289999996 podStartE2EDuration="4.071626529s" podCreationTimestamp="2025-11-28 11:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:25:56.065341785 +0000 UTC m=+1154.388585022" watchObservedRunningTime="2025-11-28 11:25:56.071626529 +0000 UTC m=+1154.394869756" Nov 28 11:25:56 crc kubenswrapper[4772]: I1128 11:25:56.487905 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68fd445458-gvkkx" podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Nov 28 11:25:57 crc kubenswrapper[4772]: I1128 11:25:57.021044 4772 generic.go:334] "Generic (PLEG): container finished" podID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerID="8c298779730781bcf03dff9dee7b7d6a124a88f3438b03f6e35dab6e372388d5" exitCode=0 Nov 28 11:25:57 crc kubenswrapper[4772]: I1128 11:25:57.021111 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68fd445458-gvkkx" 
event={"ID":"7eabec09-9340-4cd4-a7db-ec957878a3a0","Type":"ContainerDied","Data":"8c298779730781bcf03dff9dee7b7d6a124a88f3438b03f6e35dab6e372388d5"} Nov 28 11:25:57 crc kubenswrapper[4772]: I1128 11:25:57.027245 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eee1b6b4-d945-405f-b39a-f1350090bd80","Type":"ContainerStarted","Data":"dae1e37f4e06f4d925f4a0046d335d0a2c4a97df9e268e7ac1800b979b4c4d14"} Nov 28 11:25:58 crc kubenswrapper[4772]: I1128 11:25:58.065732 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eee1b6b4-d945-405f-b39a-f1350090bd80","Type":"ContainerStarted","Data":"574b6a1430f1e173035a3f03d7293c832639cb3829f0ead522c32d4dc1e98b2f"} Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.077470 4772 generic.go:334] "Generic (PLEG): container finished" podID="3cb60a74-40d5-4c23-85ea-a5256ff13988" containerID="f7fa60ab485700246fa8a955b4aec3806c70a46c74a47c0ecb93756a90fa078b" exitCode=0 Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.078494 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95fd65484-v9p98" event={"ID":"3cb60a74-40d5-4c23-85ea-a5256ff13988","Type":"ContainerDied","Data":"f7fa60ab485700246fa8a955b4aec3806c70a46c74a47c0ecb93756a90fa078b"} Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.083543 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eee1b6b4-d945-405f-b39a-f1350090bd80","Type":"ContainerStarted","Data":"2b8031fb7cd18e13cd61c2b61a0a9cc125f834798cdeb1a79ca02d30323e828c"} Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.084633 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.126445 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.499205778 podStartE2EDuration="11.126418044s" podCreationTimestamp="2025-11-28 11:25:48 +0000 UTC" firstStartedPulling="2025-11-28 11:25:49.929722933 +0000 UTC m=+1148.252966150" lastFinishedPulling="2025-11-28 11:25:58.556935189 +0000 UTC m=+1156.880178416" observedRunningTime="2025-11-28 11:25:59.109261267 +0000 UTC m=+1157.432504494" watchObservedRunningTime="2025-11-28 11:25:59.126418044 +0000 UTC m=+1157.449661271" Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.173837 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-95fd65484-v9p98" Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.211219 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-config\") pod \"3cb60a74-40d5-4c23-85ea-a5256ff13988\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.211382 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j5bs\" (UniqueName: \"kubernetes.io/projected/3cb60a74-40d5-4c23-85ea-a5256ff13988-kube-api-access-4j5bs\") pod \"3cb60a74-40d5-4c23-85ea-a5256ff13988\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.211513 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-httpd-config\") pod \"3cb60a74-40d5-4c23-85ea-a5256ff13988\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.211588 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-combined-ca-bundle\") pod \"3cb60a74-40d5-4c23-85ea-a5256ff13988\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.211640 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-ovndb-tls-certs\") pod \"3cb60a74-40d5-4c23-85ea-a5256ff13988\" (UID: \"3cb60a74-40d5-4c23-85ea-a5256ff13988\") " Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.217236 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3cb60a74-40d5-4c23-85ea-a5256ff13988" (UID: "3cb60a74-40d5-4c23-85ea-a5256ff13988"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.221631 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb60a74-40d5-4c23-85ea-a5256ff13988-kube-api-access-4j5bs" (OuterVolumeSpecName: "kube-api-access-4j5bs") pod "3cb60a74-40d5-4c23-85ea-a5256ff13988" (UID: "3cb60a74-40d5-4c23-85ea-a5256ff13988"). InnerVolumeSpecName "kube-api-access-4j5bs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.228183 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j5bs\" (UniqueName: \"kubernetes.io/projected/3cb60a74-40d5-4c23-85ea-a5256ff13988-kube-api-access-4j5bs\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.228483 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.269667 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cb60a74-40d5-4c23-85ea-a5256ff13988" (UID: "3cb60a74-40d5-4c23-85ea-a5256ff13988"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.272017 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-config" (OuterVolumeSpecName: "config") pod "3cb60a74-40d5-4c23-85ea-a5256ff13988" (UID: "3cb60a74-40d5-4c23-85ea-a5256ff13988"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.290403 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3cb60a74-40d5-4c23-85ea-a5256ff13988" (UID: "3cb60a74-40d5-4c23-85ea-a5256ff13988"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.331042 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.331090 4772 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.331105 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3cb60a74-40d5-4c23-85ea-a5256ff13988-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.433975 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 11:25:59 crc kubenswrapper[4772]: I1128 11:25:59.486858 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 11:26:00 crc kubenswrapper[4772]: I1128 11:26:00.095458 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95fd65484-v9p98" event={"ID":"3cb60a74-40d5-4c23-85ea-a5256ff13988","Type":"ContainerDied","Data":"7dcc947fe3b84bc73b1b2dd266b3b1ac3daf6f811eb9d35796977e0496cb407a"} Nov 28 11:26:00 crc kubenswrapper[4772]: I1128 11:26:00.095593 4772 scope.go:117] "RemoveContainer" containerID="10dafc5ee45eeee82aabdba12b565d2afd6f4f07ee51479d3dbabd2919963abb" Nov 28 11:26:00 crc kubenswrapper[4772]: I1128 11:26:00.095508 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-95fd65484-v9p98" Nov 28 11:26:00 crc kubenswrapper[4772]: I1128 11:26:00.096294 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="529e270c-458a-45cf-bb5e-f2aecfa83b27" containerName="probe" containerID="cri-o://26b9cdd0335530b93f409ca9276bdde27359129391ab7f466805ca1c260b2aa9" gracePeriod=30 Nov 28 11:26:00 crc kubenswrapper[4772]: I1128 11:26:00.096302 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="529e270c-458a-45cf-bb5e-f2aecfa83b27" containerName="cinder-scheduler" containerID="cri-o://0effa7f2118567a9fe1f4e461ef0ac8c4b63a48e0701f358ff39fe260776ff23" gracePeriod=30 Nov 28 11:26:00 crc kubenswrapper[4772]: I1128 11:26:00.143754 4772 scope.go:117] "RemoveContainer" containerID="f7fa60ab485700246fa8a955b4aec3806c70a46c74a47c0ecb93756a90fa078b" Nov 28 11:26:00 crc kubenswrapper[4772]: I1128 11:26:00.180015 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-95fd65484-v9p98"] Nov 28 11:26:00 crc kubenswrapper[4772]: I1128 11:26:00.193259 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-95fd65484-v9p98"] Nov 28 11:26:00 crc kubenswrapper[4772]: I1128 11:26:00.289518 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:26:00 crc kubenswrapper[4772]: I1128 11:26:00.336957 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-667fbdd95d-n4zbv" Nov 28 11:26:00 crc kubenswrapper[4772]: I1128 11:26:00.404612 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-api-9c587cc74-7vk5q"] Nov 28 11:26:00 crc kubenswrapper[4772]: I1128 11:26:00.404971 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9c587cc74-7vk5q" podUID="dd73003f-bfe8-434c-a400-7d03fbd08d2f" containerName="barbican-api-log" containerID="cri-o://471d4641727ce30c311b0515c91ce0acd3b44af77ea44a7e0c2c094f8f400d2e" gracePeriod=30 Nov 28 11:26:00 crc kubenswrapper[4772]: I1128 11:26:00.405569 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9c587cc74-7vk5q" podUID="dd73003f-bfe8-434c-a400-7d03fbd08d2f" containerName="barbican-api" containerID="cri-o://b93ccb2edc1836e9818c9445fe9e47601947eadf07f7f46f8149c62f9e8c595a" gracePeriod=30 Nov 28 11:26:01 crc kubenswrapper[4772]: I1128 11:26:01.114291 4772 generic.go:334] "Generic (PLEG): container finished" podID="dd73003f-bfe8-434c-a400-7d03fbd08d2f" containerID="471d4641727ce30c311b0515c91ce0acd3b44af77ea44a7e0c2c094f8f400d2e" exitCode=143 Nov 28 11:26:01 crc kubenswrapper[4772]: I1128 11:26:01.114780 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9c587cc74-7vk5q" event={"ID":"dd73003f-bfe8-434c-a400-7d03fbd08d2f","Type":"ContainerDied","Data":"471d4641727ce30c311b0515c91ce0acd3b44af77ea44a7e0c2c094f8f400d2e"} Nov 28 11:26:01 crc kubenswrapper[4772]: I1128 11:26:01.822006 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74fc54dcd4-9z4wp" Nov 28 11:26:01 crc kubenswrapper[4772]: I1128 11:26:01.832280 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74fc54dcd4-9z4wp" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.023158 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb60a74-40d5-4c23-85ea-a5256ff13988" path="/var/lib/kubelet/pods/3cb60a74-40d5-4c23-85ea-a5256ff13988/volumes" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.145239 4772 generic.go:334] "Generic (PLEG): container finished" podID="529e270c-458a-45cf-bb5e-f2aecfa83b27" containerID="26b9cdd0335530b93f409ca9276bdde27359129391ab7f466805ca1c260b2aa9" exitCode=0 Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.145277 4772 generic.go:334] "Generic (PLEG): container finished" podID="529e270c-458a-45cf-bb5e-f2aecfa83b27" containerID="0effa7f2118567a9fe1f4e461ef0ac8c4b63a48e0701f358ff39fe260776ff23" exitCode=0 Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.146946 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"529e270c-458a-45cf-bb5e-f2aecfa83b27","Type":"ContainerDied","Data":"26b9cdd0335530b93f409ca9276bdde27359129391ab7f466805ca1c260b2aa9"} Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.147009 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"529e270c-458a-45cf-bb5e-f2aecfa83b27","Type":"ContainerDied","Data":"0effa7f2118567a9fe1f4e461ef0ac8c4b63a48e0701f358ff39fe260776ff23"} Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.147026 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"529e270c-458a-45cf-bb5e-f2aecfa83b27","Type":"ContainerDied","Data":"b231d574e9fc92ff4b452700f7d3041808207fc722389aab19728211ebf20eb2"} Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.147040 4772 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b231d574e9fc92ff4b452700f7d3041808207fc722389aab19728211ebf20eb2" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.153253 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.211427 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-scripts\") pod \"529e270c-458a-45cf-bb5e-f2aecfa83b27\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.211530 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-combined-ca-bundle\") pod \"529e270c-458a-45cf-bb5e-f2aecfa83b27\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.211576 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-config-data\") pod \"529e270c-458a-45cf-bb5e-f2aecfa83b27\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.211617 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/529e270c-458a-45cf-bb5e-f2aecfa83b27-etc-machine-id\") pod \"529e270c-458a-45cf-bb5e-f2aecfa83b27\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.211655 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmvdk\" (UniqueName: \"kubernetes.io/projected/529e270c-458a-45cf-bb5e-f2aecfa83b27-kube-api-access-rmvdk\") pod \"529e270c-458a-45cf-bb5e-f2aecfa83b27\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.211821 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-config-data-custom\") pod \"529e270c-458a-45cf-bb5e-f2aecfa83b27\" (UID: \"529e270c-458a-45cf-bb5e-f2aecfa83b27\") " Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.213809 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/529e270c-458a-45cf-bb5e-f2aecfa83b27-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "529e270c-458a-45cf-bb5e-f2aecfa83b27" (UID: "529e270c-458a-45cf-bb5e-f2aecfa83b27"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.222860 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529e270c-458a-45cf-bb5e-f2aecfa83b27-kube-api-access-rmvdk" (OuterVolumeSpecName: "kube-api-access-rmvdk") pod "529e270c-458a-45cf-bb5e-f2aecfa83b27" (UID: "529e270c-458a-45cf-bb5e-f2aecfa83b27"). InnerVolumeSpecName "kube-api-access-rmvdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.227426 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "529e270c-458a-45cf-bb5e-f2aecfa83b27" (UID: "529e270c-458a-45cf-bb5e-f2aecfa83b27"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.232633 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-scripts" (OuterVolumeSpecName: "scripts") pod "529e270c-458a-45cf-bb5e-f2aecfa83b27" (UID: "529e270c-458a-45cf-bb5e-f2aecfa83b27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.304995 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "529e270c-458a-45cf-bb5e-f2aecfa83b27" (UID: "529e270c-458a-45cf-bb5e-f2aecfa83b27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.314263 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.314298 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.314310 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.314319 4772 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/529e270c-458a-45cf-bb5e-f2aecfa83b27-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.314327 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmvdk\" (UniqueName: \"kubernetes.io/projected/529e270c-458a-45cf-bb5e-f2aecfa83b27-kube-api-access-rmvdk\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.371506 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-config-data" (OuterVolumeSpecName: "config-data") pod "529e270c-458a-45cf-bb5e-f2aecfa83b27" (UID: "529e270c-458a-45cf-bb5e-f2aecfa83b27"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.416533 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529e270c-458a-45cf-bb5e-f2aecfa83b27-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:02 crc kubenswrapper[4772]: I1128 11:26:02.674153 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-56987d8b67-lwl5z" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.157510 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.201780 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.217397 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.233391 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 11:26:03 crc kubenswrapper[4772]: E1128 11:26:03.233961 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb60a74-40d5-4c23-85ea-a5256ff13988" containerName="neutron-httpd" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.233993 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb60a74-40d5-4c23-85ea-a5256ff13988" containerName="neutron-httpd" Nov 28 11:26:03 crc kubenswrapper[4772]: E1128 11:26:03.234029 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52af4d7c-4347-47f2-8394-aeb9a51ae52f" containerName="init" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.234039 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52af4d7c-4347-47f2-8394-aeb9a51ae52f" containerName="init" Nov 28 11:26:03 crc kubenswrapper[4772]: E1128 11:26:03.234056 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52af4d7c-4347-47f2-8394-aeb9a51ae52f" containerName="dnsmasq-dns" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.234063 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52af4d7c-4347-47f2-8394-aeb9a51ae52f" containerName="dnsmasq-dns" Nov 28 11:26:03 crc kubenswrapper[4772]: E1128 11:26:03.234077 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529e270c-458a-45cf-bb5e-f2aecfa83b27" containerName="cinder-scheduler" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.234086 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="529e270c-458a-45cf-bb5e-f2aecfa83b27" containerName="cinder-scheduler" Nov 28 11:26:03 crc kubenswrapper[4772]: E1128 11:26:03.234106 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529e270c-458a-45cf-bb5e-f2aecfa83b27" containerName="probe" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.234116 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="529e270c-458a-45cf-bb5e-f2aecfa83b27" containerName="probe" Nov 28 11:26:03 crc kubenswrapper[4772]: E1128 11:26:03.234130 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb60a74-40d5-4c23-85ea-a5256ff13988" containerName="neutron-api" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.234156 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb60a74-40d5-4c23-85ea-a5256ff13988" containerName="neutron-api" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.234409 4772 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="52af4d7c-4347-47f2-8394-aeb9a51ae52f" containerName="dnsmasq-dns" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.234431 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb60a74-40d5-4c23-85ea-a5256ff13988" containerName="neutron-httpd" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.234446 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="529e270c-458a-45cf-bb5e-f2aecfa83b27" containerName="cinder-scheduler" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.234455 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb60a74-40d5-4c23-85ea-a5256ff13988" containerName="neutron-api" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.234472 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="529e270c-458a-45cf-bb5e-f2aecfa83b27" containerName="probe" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.235822 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.240937 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.242776 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.336955 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.337397 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.337436 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spqsx\" (UniqueName: \"kubernetes.io/projected/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-kube-api-access-spqsx\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.337482 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.337515 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.337551 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.439380 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.439480 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.439546 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.439610 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.439688 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.439692 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.439736 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spqsx\" (UniqueName: \"kubernetes.io/projected/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-kube-api-access-spqsx\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.443902 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.444925 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.444985 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.445665 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.481469 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spqsx\" (UniqueName: \"kubernetes.io/projected/7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c-kube-api-access-spqsx\") pod \"cinder-scheduler-0\" (UID: \"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c\") " pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.554588 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.617392 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9c587cc74-7vk5q" podUID="dd73003f-bfe8-434c-a400-7d03fbd08d2f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:39448->10.217.0.160:9311: read: connection reset by peer" Nov 28 11:26:03 crc kubenswrapper[4772]: I1128 11:26:03.617418 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9c587cc74-7vk5q" podUID="dd73003f-bfe8-434c-a400-7d03fbd08d2f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:39434->10.217.0.160:9311: read: connection reset by peer" Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.019501 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="529e270c-458a-45cf-bb5e-f2aecfa83b27" path="/var/lib/kubelet/pods/529e270c-458a-45cf-bb5e-f2aecfa83b27/volumes" Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.166762 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.205781 4772 generic.go:334] "Generic (PLEG): container finished" podID="dd73003f-bfe8-434c-a400-7d03fbd08d2f" containerID="b93ccb2edc1836e9818c9445fe9e47601947eadf07f7f46f8149c62f9e8c595a" exitCode=0 Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.205874 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9c587cc74-7vk5q" event={"ID":"dd73003f-bfe8-434c-a400-7d03fbd08d2f","Type":"ContainerDied","Data":"b93ccb2edc1836e9818c9445fe9e47601947eadf07f7f46f8149c62f9e8c595a"} Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.205939 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9c587cc74-7vk5q" event={"ID":"dd73003f-bfe8-434c-a400-7d03fbd08d2f","Type":"ContainerDied","Data":"f4d1a4bdbc4fb47a453363b63cbff0b60b9974b03e1d949ef261407eaad73814"} Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.205955 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4d1a4bdbc4fb47a453363b63cbff0b60b9974b03e1d949ef261407eaad73814" Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.276114 4772 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.372707 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-config-data\") pod \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.372757 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl4ts\" (UniqueName: \"kubernetes.io/projected/dd73003f-bfe8-434c-a400-7d03fbd08d2f-kube-api-access-cl4ts\") pod \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.372814 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-config-data-custom\") pod \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.372927 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-combined-ca-bundle\") pod \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.373022 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd73003f-bfe8-434c-a400-7d03fbd08d2f-logs\") pod \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\" (UID: \"dd73003f-bfe8-434c-a400-7d03fbd08d2f\") " Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.373862 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd73003f-bfe8-434c-a400-7d03fbd08d2f-logs" (OuterVolumeSpecName: "logs") pod "dd73003f-bfe8-434c-a400-7d03fbd08d2f" (UID: "dd73003f-bfe8-434c-a400-7d03fbd08d2f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.381922 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd73003f-bfe8-434c-a400-7d03fbd08d2f-kube-api-access-cl4ts" (OuterVolumeSpecName: "kube-api-access-cl4ts") pod "dd73003f-bfe8-434c-a400-7d03fbd08d2f" (UID: "dd73003f-bfe8-434c-a400-7d03fbd08d2f"). InnerVolumeSpecName "kube-api-access-cl4ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.387544 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dd73003f-bfe8-434c-a400-7d03fbd08d2f" (UID: "dd73003f-bfe8-434c-a400-7d03fbd08d2f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.424672 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd73003f-bfe8-434c-a400-7d03fbd08d2f" (UID: "dd73003f-bfe8-434c-a400-7d03fbd08d2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.432054 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-config-data" (OuterVolumeSpecName: "config-data") pod "dd73003f-bfe8-434c-a400-7d03fbd08d2f" (UID: "dd73003f-bfe8-434c-a400-7d03fbd08d2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.475920 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd73003f-bfe8-434c-a400-7d03fbd08d2f-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.475955 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.475966 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl4ts\" (UniqueName: \"kubernetes.io/projected/dd73003f-bfe8-434c-a400-7d03fbd08d2f-kube-api-access-cl4ts\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.475976 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:04 crc kubenswrapper[4772]: I1128 11:26:04.475985 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd73003f-bfe8-434c-a400-7d03fbd08d2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:05 crc kubenswrapper[4772]: I1128 11:26:05.220022 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c","Type":"ContainerStarted","Data":"1d68b7dd1b9c47f941cc9e417db204a996f7fc4a4d6f1378b7c9850b7f4cb4cc"} Nov 28 11:26:05 crc kubenswrapper[4772]: I1128 11:26:05.220483 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c","Type":"ContainerStarted","Data":"519b66f1461cde016fd3700011a52f3db6062c5c482493ca3b7c3beecaf9a692"} Nov 28 11:26:05 crc kubenswrapper[4772]: I1128 11:26:05.220041 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-9c587cc74-7vk5q" Nov 28 11:26:05 crc kubenswrapper[4772]: I1128 11:26:05.269928 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9c587cc74-7vk5q"] Nov 28 11:26:05 crc kubenswrapper[4772]: I1128 11:26:05.288194 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-9c587cc74-7vk5q"] Nov 28 11:26:05 crc kubenswrapper[4772]: I1128 11:26:05.451767 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.007092 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd73003f-bfe8-434c-a400-7d03fbd08d2f" path="/var/lib/kubelet/pods/dd73003f-bfe8-434c-a400-7d03fbd08d2f/volumes" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.141719 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 28 11:26:06 crc kubenswrapper[4772]: E1128 11:26:06.143310 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd73003f-bfe8-434c-a400-7d03fbd08d2f" containerName="barbican-api-log" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.149573 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd73003f-bfe8-434c-a400-7d03fbd08d2f" containerName="barbican-api-log" Nov 28 11:26:06 crc kubenswrapper[4772]: E1128 11:26:06.149739 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd73003f-bfe8-434c-a400-7d03fbd08d2f" containerName="barbican-api" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.149804 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd73003f-bfe8-434c-a400-7d03fbd08d2f" containerName="barbican-api" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.150709 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd73003f-bfe8-434c-a400-7d03fbd08d2f" containerName="barbican-api" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.150820 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd73003f-bfe8-434c-a400-7d03fbd08d2f" containerName="barbican-api-log" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.152231 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.167568 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.167944 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.184420 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-nqzzl" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.210894 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90047d06-b7f8-416f-a4f1-6f76b5b94f39-openstack-config\") pod \"openstackclient\" (UID: \"90047d06-b7f8-416f-a4f1-6f76b5b94f39\") " pod="openstack/openstackclient" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.210945 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j22p\" (UniqueName: \"kubernetes.io/projected/90047d06-b7f8-416f-a4f1-6f76b5b94f39-kube-api-access-6j22p\") pod \"openstackclient\" (UID: \"90047d06-b7f8-416f-a4f1-6f76b5b94f39\") " pod="openstack/openstackclient" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.211018 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90047d06-b7f8-416f-a4f1-6f76b5b94f39-openstack-config-secret\") pod \"openstackclient\" (UID: \"90047d06-b7f8-416f-a4f1-6f76b5b94f39\") " pod="openstack/openstackclient" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.211038 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90047d06-b7f8-416f-a4f1-6f76b5b94f39-combined-ca-bundle\") pod \"openstackclient\" (UID: \"90047d06-b7f8-416f-a4f1-6f76b5b94f39\") " pod="openstack/openstackclient" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.213936 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.232227 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c","Type":"ContainerStarted","Data":"81c39512d63eefb59323ae86037c49b92c65a6f2f5edfa986dfc3f996817eb47"} Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.265682 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.265661199 podStartE2EDuration="3.265661199s" podCreationTimestamp="2025-11-28 11:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:26:06.25919898 +0000 UTC m=+1164.582442217" watchObservedRunningTime="2025-11-28 11:26:06.265661199 +0000 UTC m=+1164.588904426" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.314692 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90047d06-b7f8-416f-a4f1-6f76b5b94f39-openstack-config\") pod \"openstackclient\" (UID: \"90047d06-b7f8-416f-a4f1-6f76b5b94f39\") " pod="openstack/openstackclient" Nov 28 11:26:06 crc 
kubenswrapper[4772]: I1128 11:26:06.314744 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j22p\" (UniqueName: \"kubernetes.io/projected/90047d06-b7f8-416f-a4f1-6f76b5b94f39-kube-api-access-6j22p\") pod \"openstackclient\" (UID: \"90047d06-b7f8-416f-a4f1-6f76b5b94f39\") " pod="openstack/openstackclient" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.314857 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90047d06-b7f8-416f-a4f1-6f76b5b94f39-openstack-config-secret\") pod \"openstackclient\" (UID: \"90047d06-b7f8-416f-a4f1-6f76b5b94f39\") " pod="openstack/openstackclient" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.314879 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90047d06-b7f8-416f-a4f1-6f76b5b94f39-combined-ca-bundle\") pod \"openstackclient\" (UID: \"90047d06-b7f8-416f-a4f1-6f76b5b94f39\") " pod="openstack/openstackclient" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.315899 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90047d06-b7f8-416f-a4f1-6f76b5b94f39-openstack-config\") pod \"openstackclient\" (UID: \"90047d06-b7f8-416f-a4f1-6f76b5b94f39\") " pod="openstack/openstackclient" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.323334 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90047d06-b7f8-416f-a4f1-6f76b5b94f39-openstack-config-secret\") pod \"openstackclient\" (UID: \"90047d06-b7f8-416f-a4f1-6f76b5b94f39\") " pod="openstack/openstackclient" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.326119 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90047d06-b7f8-416f-a4f1-6f76b5b94f39-combined-ca-bundle\") pod \"openstackclient\" (UID: \"90047d06-b7f8-416f-a4f1-6f76b5b94f39\") " pod="openstack/openstackclient" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.340220 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j22p\" (UniqueName: \"kubernetes.io/projected/90047d06-b7f8-416f-a4f1-6f76b5b94f39-kube-api-access-6j22p\") pod \"openstackclient\" (UID: \"90047d06-b7f8-416f-a4f1-6f76b5b94f39\") " pod="openstack/openstackclient" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.486936 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68fd445458-gvkkx" podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Nov 28 11:26:06 crc kubenswrapper[4772]: I1128 11:26:06.490614 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 11:26:07 crc kubenswrapper[4772]: I1128 11:26:07.048294 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 11:26:07 crc kubenswrapper[4772]: I1128 11:26:07.245437 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"90047d06-b7f8-416f-a4f1-6f76b5b94f39","Type":"ContainerStarted","Data":"b927b8f6f482f9420c7b4dfcc7f84e9dbcb5dd5804c571a8c3c7556fb4badba9"} Nov 28 11:26:08 crc kubenswrapper[4772]: I1128 11:26:08.554751 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.418236 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5d5fc6594c-kj2rm"] Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.419991 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.423353 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.424842 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.425006 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.437876 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5d5fc6594c-kj2rm"] Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.487161 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c10f0c5-2315-469e-bda3-d3b66ab776e6-config-data\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.487218 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c10f0c5-2315-469e-bda3-d3b66ab776e6-combined-ca-bundle\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.487255 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c10f0c5-2315-469e-bda3-d3b66ab776e6-log-httpd\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.487294 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4c10f0c5-2315-469e-bda3-d3b66ab776e6-etc-swift\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.487333 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4c10f0c5-2315-469e-bda3-d3b66ab776e6-public-tls-certs\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.487368 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c10f0c5-2315-469e-bda3-d3b66ab776e6-internal-tls-certs\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.487386 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c10f0c5-2315-469e-bda3-d3b66ab776e6-run-httpd\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.487433 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8txx\" (UniqueName: \"kubernetes.io/projected/4c10f0c5-2315-469e-bda3-d3b66ab776e6-kube-api-access-l8txx\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.589669 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c10f0c5-2315-469e-bda3-d3b66ab776e6-combined-ca-bundle\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.589734 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c10f0c5-2315-469e-bda3-d3b66ab776e6-log-httpd\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.589777 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4c10f0c5-2315-469e-bda3-d3b66ab776e6-etc-swift\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.589819 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c10f0c5-2315-469e-bda3-d3b66ab776e6-public-tls-certs\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.589842 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c10f0c5-2315-469e-bda3-d3b66ab776e6-internal-tls-certs\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.589859 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4c10f0c5-2315-469e-bda3-d3b66ab776e6-run-httpd\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.589915 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8txx\" (UniqueName: \"kubernetes.io/projected/4c10f0c5-2315-469e-bda3-d3b66ab776e6-kube-api-access-l8txx\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.590020 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c10f0c5-2315-469e-bda3-d3b66ab776e6-config-data\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.591644 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c10f0c5-2315-469e-bda3-d3b66ab776e6-log-httpd\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.591761 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c10f0c5-2315-469e-bda3-d3b66ab776e6-run-httpd\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.601160 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c10f0c5-2315-469e-bda3-d3b66ab776e6-combined-ca-bundle\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.601532 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c10f0c5-2315-469e-bda3-d3b66ab776e6-config-data\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.601725 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c10f0c5-2315-469e-bda3-d3b66ab776e6-public-tls-certs\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.603010 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4c10f0c5-2315-469e-bda3-d3b66ab776e6-etc-swift\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.611662 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c10f0c5-2315-469e-bda3-d3b66ab776e6-internal-tls-certs\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: 
\"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.611876 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8txx\" (UniqueName: \"kubernetes.io/projected/4c10f0c5-2315-469e-bda3-d3b66ab776e6-kube-api-access-l8txx\") pod \"swift-proxy-5d5fc6594c-kj2rm\" (UID: \"4c10f0c5-2315-469e-bda3-d3b66ab776e6\") " pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:09 crc kubenswrapper[4772]: I1128 11:26:09.742939 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:10 crc kubenswrapper[4772]: I1128 11:26:10.396115 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5d5fc6594c-kj2rm"] Nov 28 11:26:11 crc kubenswrapper[4772]: I1128 11:26:11.295727 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d5fc6594c-kj2rm" event={"ID":"4c10f0c5-2315-469e-bda3-d3b66ab776e6","Type":"ContainerStarted","Data":"7c6c2ac92f2d45b9e1350ac8a83ce6449ad929da638fb640182d0139916c3e2d"} Nov 28 11:26:11 crc kubenswrapper[4772]: I1128 11:26:11.296232 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:11 crc kubenswrapper[4772]: I1128 11:26:11.296249 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d5fc6594c-kj2rm" event={"ID":"4c10f0c5-2315-469e-bda3-d3b66ab776e6","Type":"ContainerStarted","Data":"86234a9affeb1d8ce4b4ab300ac314cf5c12fc789f9298d7288dc0b81c70ffd8"} Nov 28 11:26:11 crc kubenswrapper[4772]: I1128 11:26:11.296258 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d5fc6594c-kj2rm" event={"ID":"4c10f0c5-2315-469e-bda3-d3b66ab776e6","Type":"ContainerStarted","Data":"44d98c06c33b66542086a9fe9ed137c5973d9a16245a67d90665fac5f4004971"} Nov 28 11:26:11 crc kubenswrapper[4772]: I1128 11:26:11.296271 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:11 crc kubenswrapper[4772]: I1128 11:26:11.332124 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5d5fc6594c-kj2rm" podStartSLOduration=2.332100037 podStartE2EDuration="2.332100037s" podCreationTimestamp="2025-11-28 11:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:26:11.328944485 +0000 UTC m=+1169.652187712" watchObservedRunningTime="2025-11-28 11:26:11.332100037 +0000 UTC m=+1169.655343264" Nov 28 11:26:11 crc kubenswrapper[4772]: I1128 11:26:11.410230 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:11 crc kubenswrapper[4772]: I1128 11:26:11.411057 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="ceilometer-central-agent" containerID="cri-o://7c232d42484490af6f5fb2347d76eba27dd1b8f17846488a1df3b805b4438fc1" gracePeriod=30 Nov 28 11:26:11 crc kubenswrapper[4772]: I1128 11:26:11.411238 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="proxy-httpd" containerID="cri-o://2b8031fb7cd18e13cd61c2b61a0a9cc125f834798cdeb1a79ca02d30323e828c" gracePeriod=30 Nov 28 
11:26:11 crc kubenswrapper[4772]: I1128 11:26:11.411480 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="ceilometer-notification-agent" containerID="cri-o://dae1e37f4e06f4d925f4a0046d335d0a2c4a97df9e268e7ac1800b979b4c4d14" gracePeriod=30 Nov 28 11:26:11 crc kubenswrapper[4772]: I1128 11:26:11.412639 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="sg-core" containerID="cri-o://574b6a1430f1e173035a3f03d7293c832639cb3829f0ead522c32d4dc1e98b2f" gracePeriod=30 Nov 28 11:26:11 crc kubenswrapper[4772]: I1128 11:26:11.429555 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.165:3000/\": EOF" Nov 28 11:26:12 crc kubenswrapper[4772]: I1128 11:26:12.319792 4772 generic.go:334] "Generic (PLEG): container finished" podID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerID="2b8031fb7cd18e13cd61c2b61a0a9cc125f834798cdeb1a79ca02d30323e828c" exitCode=0 Nov 28 11:26:12 crc kubenswrapper[4772]: I1128 11:26:12.319855 4772 generic.go:334] "Generic (PLEG): container finished" podID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerID="574b6a1430f1e173035a3f03d7293c832639cb3829f0ead522c32d4dc1e98b2f" exitCode=2 Nov 28 11:26:12 crc kubenswrapper[4772]: I1128 11:26:12.319865 4772 generic.go:334] "Generic (PLEG): container finished" podID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerID="7c232d42484490af6f5fb2347d76eba27dd1b8f17846488a1df3b805b4438fc1" exitCode=0 Nov 28 11:26:12 crc kubenswrapper[4772]: I1128 11:26:12.319852 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eee1b6b4-d945-405f-b39a-f1350090bd80","Type":"ContainerDied","Data":"2b8031fb7cd18e13cd61c2b61a0a9cc125f834798cdeb1a79ca02d30323e828c"} Nov 28 11:26:12 crc kubenswrapper[4772]: I1128 11:26:12.319911 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eee1b6b4-d945-405f-b39a-f1350090bd80","Type":"ContainerDied","Data":"574b6a1430f1e173035a3f03d7293c832639cb3829f0ead522c32d4dc1e98b2f"} Nov 28 11:26:12 crc kubenswrapper[4772]: I1128 11:26:12.319925 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eee1b6b4-d945-405f-b39a-f1350090bd80","Type":"ContainerDied","Data":"7c232d42484490af6f5fb2347d76eba27dd1b8f17846488a1df3b805b4438fc1"} Nov 28 11:26:13 crc kubenswrapper[4772]: I1128 11:26:13.811076 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 11:26:14 crc kubenswrapper[4772]: I1128 11:26:14.344822 4772 generic.go:334] "Generic (PLEG): container finished" podID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerID="dae1e37f4e06f4d925f4a0046d335d0a2c4a97df9e268e7ac1800b979b4c4d14" exitCode=0 Nov 28 11:26:14 crc kubenswrapper[4772]: I1128 11:26:14.344914 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eee1b6b4-d945-405f-b39a-f1350090bd80","Type":"ContainerDied","Data":"dae1e37f4e06f4d925f4a0046d335d0a2c4a97df9e268e7ac1800b979b4c4d14"} Nov 28 11:26:16 crc kubenswrapper[4772]: I1128 11:26:16.487628 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68fd445458-gvkkx" 
podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Nov 28 11:26:16 crc kubenswrapper[4772]: I1128 11:26:16.865121 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:26:16 crc kubenswrapper[4772]: I1128 11:26:16.865456 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" containerName="glance-log" containerID="cri-o://b3a3c15f75f1dd0d8ea8698890c19ce9db97fb2b51f97f6d52d3484996e6a097" gracePeriod=30 Nov 28 11:26:16 crc kubenswrapper[4772]: I1128 11:26:16.865585 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" containerName="glance-httpd" containerID="cri-o://c08587c649ab901237ef12fd2a7200bc78dc9fef2ee123b864a2db18290161cc" gracePeriod=30 Nov 28 11:26:17 crc kubenswrapper[4772]: I1128 11:26:17.382617 4772 generic.go:334] "Generic (PLEG): container finished" podID="2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" containerID="b3a3c15f75f1dd0d8ea8698890c19ce9db97fb2b51f97f6d52d3484996e6a097" exitCode=143 Nov 28 11:26:17 crc kubenswrapper[4772]: I1128 11:26:17.382686 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211","Type":"ContainerDied","Data":"b3a3c15f75f1dd0d8ea8698890c19ce9db97fb2b51f97f6d52d3484996e6a097"} Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.109964 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.110276 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cc9ee62a-cab5-4cac-afa0-18c776d8bab8" containerName="glance-log" containerID="cri-o://3ef314e38bd569ccddcb45b6dd790944a5a935910d9ae69d1381d79735b5df85" gracePeriod=30 Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.110456 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cc9ee62a-cab5-4cac-afa0-18c776d8bab8" containerName="glance-httpd" containerID="cri-o://ebf98bc0f3b316c0279cc09f0929c153b2826be708a628a1a7231be1621f255e" gracePeriod=30 Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.401300 4772 generic.go:334] "Generic (PLEG): container finished" podID="cc9ee62a-cab5-4cac-afa0-18c776d8bab8" containerID="3ef314e38bd569ccddcb45b6dd790944a5a935910d9ae69d1381d79735b5df85" exitCode=143 Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.401354 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc9ee62a-cab5-4cac-afa0-18c776d8bab8","Type":"ContainerDied","Data":"3ef314e38bd569ccddcb45b6dd790944a5a935910d9ae69d1381d79735b5df85"} Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.434512 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.508269 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eee1b6b4-d945-405f-b39a-f1350090bd80-log-httpd\") pod \"eee1b6b4-d945-405f-b39a-f1350090bd80\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.508428 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eee1b6b4-d945-405f-b39a-f1350090bd80-run-httpd\") pod \"eee1b6b4-d945-405f-b39a-f1350090bd80\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.508460 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-config-data\") pod \"eee1b6b4-d945-405f-b39a-f1350090bd80\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.508487 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdmzx\" (UniqueName: \"kubernetes.io/projected/eee1b6b4-d945-405f-b39a-f1350090bd80-kube-api-access-cdmzx\") pod \"eee1b6b4-d945-405f-b39a-f1350090bd80\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.508513 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-scripts\") pod \"eee1b6b4-d945-405f-b39a-f1350090bd80\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.508815 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eee1b6b4-d945-405f-b39a-f1350090bd80-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eee1b6b4-d945-405f-b39a-f1350090bd80" (UID: "eee1b6b4-d945-405f-b39a-f1350090bd80"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.509774 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eee1b6b4-d945-405f-b39a-f1350090bd80-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eee1b6b4-d945-405f-b39a-f1350090bd80" (UID: "eee1b6b4-d945-405f-b39a-f1350090bd80"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.515809 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-scripts" (OuterVolumeSpecName: "scripts") pod "eee1b6b4-d945-405f-b39a-f1350090bd80" (UID: "eee1b6b4-d945-405f-b39a-f1350090bd80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.516518 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eee1b6b4-d945-405f-b39a-f1350090bd80-kube-api-access-cdmzx" (OuterVolumeSpecName: "kube-api-access-cdmzx") pod "eee1b6b4-d945-405f-b39a-f1350090bd80" (UID: "eee1b6b4-d945-405f-b39a-f1350090bd80"). InnerVolumeSpecName "kube-api-access-cdmzx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.610873 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-combined-ca-bundle\") pod \"eee1b6b4-d945-405f-b39a-f1350090bd80\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.610970 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-sg-core-conf-yaml\") pod \"eee1b6b4-d945-405f-b39a-f1350090bd80\" (UID: \"eee1b6b4-d945-405f-b39a-f1350090bd80\") " Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.611591 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eee1b6b4-d945-405f-b39a-f1350090bd80-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.611612 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eee1b6b4-d945-405f-b39a-f1350090bd80-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.611623 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdmzx\" (UniqueName: \"kubernetes.io/projected/eee1b6b4-d945-405f-b39a-f1350090bd80-kube-api-access-cdmzx\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.611633 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.614548 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-config-data" (OuterVolumeSpecName: "config-data") pod "eee1b6b4-d945-405f-b39a-f1350090bd80" (UID: "eee1b6b4-d945-405f-b39a-f1350090bd80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.639675 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eee1b6b4-d945-405f-b39a-f1350090bd80" (UID: "eee1b6b4-d945-405f-b39a-f1350090bd80"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.676480 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eee1b6b4-d945-405f-b39a-f1350090bd80" (UID: "eee1b6b4-d945-405f-b39a-f1350090bd80"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.713206 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.713252 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:18 crc kubenswrapper[4772]: I1128 11:26:18.713265 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee1b6b4-d945-405f-b39a-f1350090bd80-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.415125 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eee1b6b4-d945-405f-b39a-f1350090bd80","Type":"ContainerDied","Data":"88140720669bbd08e183a234a6b1b90cf9e16eae64e22e504d58eaf68954bb72"} Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.415687 4772 scope.go:117] "RemoveContainer" containerID="2b8031fb7cd18e13cd61c2b61a0a9cc125f834798cdeb1a79ca02d30323e828c" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.415268 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.417351 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"90047d06-b7f8-416f-a4f1-6f76b5b94f39","Type":"ContainerStarted","Data":"2dbf2a863570da80598498b79d60f8d0a11c5c1ac84842f729e1e41200e9f31f"} Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.452781 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.371020322 podStartE2EDuration="13.452749808s" podCreationTimestamp="2025-11-28 11:26:06 +0000 UTC" firstStartedPulling="2025-11-28 11:26:07.060525284 +0000 UTC m=+1165.383768511" lastFinishedPulling="2025-11-28 11:26:18.14225477 +0000 UTC m=+1176.465497997" observedRunningTime="2025-11-28 11:26:19.448458496 +0000 UTC m=+1177.771701783" watchObservedRunningTime="2025-11-28 11:26:19.452749808 +0000 UTC m=+1177.775993085" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.459931 4772 scope.go:117] "RemoveContainer" containerID="574b6a1430f1e173035a3f03d7293c832639cb3829f0ead522c32d4dc1e98b2f" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.494676 4772 scope.go:117] "RemoveContainer" containerID="dae1e37f4e06f4d925f4a0046d335d0a2c4a97df9e268e7ac1800b979b4c4d14" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.495303 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.515038 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.523189 4772 scope.go:117] "RemoveContainer" containerID="7c232d42484490af6f5fb2347d76eba27dd1b8f17846488a1df3b805b4438fc1" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.533087 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:19 crc kubenswrapper[4772]: E1128 11:26:19.533841 4772 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="ceilometer-notification-agent" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.533873 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="ceilometer-notification-agent" Nov 28 11:26:19 crc kubenswrapper[4772]: E1128 11:26:19.533891 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="sg-core" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.533901 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="sg-core" Nov 28 11:26:19 crc kubenswrapper[4772]: E1128 11:26:19.533920 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="ceilometer-central-agent" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.533929 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="ceilometer-central-agent" Nov 28 11:26:19 crc kubenswrapper[4772]: E1128 11:26:19.533944 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="proxy-httpd" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.533952 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="proxy-httpd" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.534241 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="proxy-httpd" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.534262 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="ceilometer-central-agent" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.534290 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="ceilometer-notification-agent" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.534302 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" containerName="sg-core" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.537233 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.540794 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.541290 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.558560 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.634171 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.634236 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ca0319f-69a6-4083-ac8c-ea905b882e7e-run-httpd\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.634271 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-scripts\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.634336 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.634374 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ca0319f-69a6-4083-ac8c-ea905b882e7e-log-httpd\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.634395 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-config-data\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.634428 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fcn\" (UniqueName: \"kubernetes.io/projected/8ca0319f-69a6-4083-ac8c-ea905b882e7e-kube-api-access-z5fcn\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.736476 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 
11:26:19.737615 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ca0319f-69a6-4083-ac8c-ea905b882e7e-run-httpd\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.737655 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-scripts\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.737737 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.737768 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ca0319f-69a6-4083-ac8c-ea905b882e7e-log-httpd\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.737790 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-config-data\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.737860 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5fcn\" (UniqueName: \"kubernetes.io/projected/8ca0319f-69a6-4083-ac8c-ea905b882e7e-kube-api-access-z5fcn\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.738783 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ca0319f-69a6-4083-ac8c-ea905b882e7e-run-httpd\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.739567 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ca0319f-69a6-4083-ac8c-ea905b882e7e-log-httpd\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.743756 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.750923 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-config-data\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.754866 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.759236 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.760674 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-scripts\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.763624 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5fcn\" (UniqueName: \"kubernetes.io/projected/8ca0319f-69a6-4083-ac8c-ea905b882e7e-kube-api-access-z5fcn\") pod \"ceilometer-0\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " pod="openstack/ceilometer-0" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.765495 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5d5fc6594c-kj2rm" Nov 28 11:26:19 crc kubenswrapper[4772]: I1128 11:26:19.861789 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.009630 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eee1b6b4-d945-405f-b39a-f1350090bd80" path="/var/lib/kubelet/pods/eee1b6b4-d945-405f-b39a-f1350090bd80/volumes" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.450976 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.465373 4772 generic.go:334] "Generic (PLEG): container finished" podID="2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" containerID="c08587c649ab901237ef12fd2a7200bc78dc9fef2ee123b864a2db18290161cc" exitCode=0 Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.465462 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211","Type":"ContainerDied","Data":"c08587c649ab901237ef12fd2a7200bc78dc9fef2ee123b864a2db18290161cc"} Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.609240 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.692894 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-config-data\") pod \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.692956 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-scripts\") pod \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.693080 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-logs\") pod \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.693128 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-combined-ca-bundle\") pod \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.693172 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.693224 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-httpd-run\") pod \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.693419 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7822\" (UniqueName: \"kubernetes.io/projected/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-kube-api-access-m7822\") pod \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.693469 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-public-tls-certs\") pod \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\" (UID: \"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211\") " Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.695820 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" (UID: "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.698421 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-logs" (OuterVolumeSpecName: "logs") pod "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" (UID: "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.710850 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-kube-api-access-m7822" (OuterVolumeSpecName: "kube-api-access-m7822") pod "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" (UID: "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211"). InnerVolumeSpecName "kube-api-access-m7822". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.710910 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" (UID: "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.721559 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-scripts" (OuterVolumeSpecName: "scripts") pod "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" (UID: "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.730372 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" (UID: "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.766518 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-config-data" (OuterVolumeSpecName: "config-data") pod "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" (UID: "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.795922 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.795960 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.795969 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.795982 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.796011 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.796020 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.796030 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7822\" (UniqueName: \"kubernetes.io/projected/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-kube-api-access-m7822\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.822615 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" (UID: "2c0b02ba-a22b-434d-ab9f-6bc5d24ab211"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.838302 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.898838 4772 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:20 crc kubenswrapper[4772]: I1128 11:26:20.898896 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.477668 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ca0319f-69a6-4083-ac8c-ea905b882e7e","Type":"ContainerStarted","Data":"73055ddaeb2a5c78036c5abf4006c904adb926e82ff730b84deedbb4482528a3"} Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.480418 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2c0b02ba-a22b-434d-ab9f-6bc5d24ab211","Type":"ContainerDied","Data":"f3dc590369f22a9b903e4dce3d296083638b546d8244dd4ac41d68f272112feb"} Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.480490 4772 scope.go:117] "RemoveContainer" containerID="c08587c649ab901237ef12fd2a7200bc78dc9fef2ee123b864a2db18290161cc" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.480499 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.518734 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.534012 4772 scope.go:117] "RemoveContainer" containerID="b3a3c15f75f1dd0d8ea8698890c19ce9db97fb2b51f97f6d52d3484996e6a097" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.544414 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.572341 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:26:21 crc kubenswrapper[4772]: E1128 11:26:21.572830 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" containerName="glance-log" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.572862 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" containerName="glance-log" Nov 28 11:26:21 crc kubenswrapper[4772]: E1128 11:26:21.572907 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" containerName="glance-httpd" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.572917 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" containerName="glance-httpd" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.573134 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" containerName="glance-log" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.573164 4772 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" containerName="glance-httpd" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.574234 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.581932 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.582141 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.591291 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.615120 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94aac27d-0e3b-431a-be6e-a88e1eeb16db-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.615174 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmxdf\" (UniqueName: \"kubernetes.io/projected/94aac27d-0e3b-431a-be6e-a88e1eeb16db-kube-api-access-rmxdf\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.615211 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94aac27d-0e3b-431a-be6e-a88e1eeb16db-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.615249 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94aac27d-0e3b-431a-be6e-a88e1eeb16db-scripts\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.615285 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94aac27d-0e3b-431a-be6e-a88e1eeb16db-config-data\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.615319 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94aac27d-0e3b-431a-be6e-a88e1eeb16db-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.615380 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94aac27d-0e3b-431a-be6e-a88e1eeb16db-logs\") pod \"glance-default-external-api-0\" (UID: 
\"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.615419 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.717543 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94aac27d-0e3b-431a-be6e-a88e1eeb16db-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.718073 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmxdf\" (UniqueName: \"kubernetes.io/projected/94aac27d-0e3b-431a-be6e-a88e1eeb16db-kube-api-access-rmxdf\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.718127 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94aac27d-0e3b-431a-be6e-a88e1eeb16db-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.718201 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94aac27d-0e3b-431a-be6e-a88e1eeb16db-scripts\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.718257 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94aac27d-0e3b-431a-be6e-a88e1eeb16db-config-data\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.718296 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94aac27d-0e3b-431a-be6e-a88e1eeb16db-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.718314 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94aac27d-0e3b-431a-be6e-a88e1eeb16db-logs\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.718384 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 
11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.718907 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.719623 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94aac27d-0e3b-431a-be6e-a88e1eeb16db-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.719637 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94aac27d-0e3b-431a-be6e-a88e1eeb16db-logs\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.723846 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94aac27d-0e3b-431a-be6e-a88e1eeb16db-scripts\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.726450 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94aac27d-0e3b-431a-be6e-a88e1eeb16db-config-data\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.727002 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94aac27d-0e3b-431a-be6e-a88e1eeb16db-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.728539 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94aac27d-0e3b-431a-be6e-a88e1eeb16db-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.741135 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmxdf\" (UniqueName: \"kubernetes.io/projected/94aac27d-0e3b-431a-be6e-a88e1eeb16db-kube-api-access-rmxdf\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.747854 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"94aac27d-0e3b-431a-be6e-a88e1eeb16db\") " pod="openstack/glance-default-external-api-0" Nov 28 11:26:21 crc kubenswrapper[4772]: I1128 11:26:21.896426 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.010675 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0b02ba-a22b-434d-ab9f-6bc5d24ab211" path="/var/lib/kubelet/pods/2c0b02ba-a22b-434d-ab9f-6bc5d24ab211/volumes" Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.309588 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 11:26:22 crc kubenswrapper[4772]: W1128 11:26:22.321773 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94aac27d_0e3b_431a_be6e_a88e1eeb16db.slice/crio-528eb41349fad3789791ad616f558e1c56ecdb4d77aadeda29c2a3c2f6452e96 WatchSource:0}: Error finding container 528eb41349fad3789791ad616f558e1c56ecdb4d77aadeda29c2a3c2f6452e96: Status 404 returned error can't find the container with id 528eb41349fad3789791ad616f558e1c56ecdb4d77aadeda29c2a3c2f6452e96 Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.528499 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94aac27d-0e3b-431a-be6e-a88e1eeb16db","Type":"ContainerStarted","Data":"528eb41349fad3789791ad616f558e1c56ecdb4d77aadeda29c2a3c2f6452e96"} Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.535104 4772 generic.go:334] "Generic (PLEG): container finished" podID="cc9ee62a-cab5-4cac-afa0-18c776d8bab8" containerID="ebf98bc0f3b316c0279cc09f0929c153b2826be708a628a1a7231be1621f255e" exitCode=0 Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.535151 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc9ee62a-cab5-4cac-afa0-18c776d8bab8","Type":"ContainerDied","Data":"ebf98bc0f3b316c0279cc09f0929c153b2826be708a628a1a7231be1621f255e"} Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.538642 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ca0319f-69a6-4083-ac8c-ea905b882e7e","Type":"ContainerStarted","Data":"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25"} Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.796779 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.861013 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-internal-tls-certs\") pod \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.861207 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-combined-ca-bundle\") pod \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.861337 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-scripts\") pod \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.861416 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h52rs\" (UniqueName: \"kubernetes.io/projected/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-kube-api-access-h52rs\") pod \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.861488 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-logs\") pod \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.861603 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-httpd-run\") pod \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.861675 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-config-data\") pod \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.861768 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\" (UID: \"cc9ee62a-cab5-4cac-afa0-18c776d8bab8\") " Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.869173 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cc9ee62a-cab5-4cac-afa0-18c776d8bab8" (UID: "cc9ee62a-cab5-4cac-afa0-18c776d8bab8"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.869665 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-logs" (OuterVolumeSpecName: "logs") pod "cc9ee62a-cab5-4cac-afa0-18c776d8bab8" (UID: "cc9ee62a-cab5-4cac-afa0-18c776d8bab8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.873266 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "cc9ee62a-cab5-4cac-afa0-18c776d8bab8" (UID: "cc9ee62a-cab5-4cac-afa0-18c776d8bab8"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.875061 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-scripts" (OuterVolumeSpecName: "scripts") pod "cc9ee62a-cab5-4cac-afa0-18c776d8bab8" (UID: "cc9ee62a-cab5-4cac-afa0-18c776d8bab8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.876828 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-kube-api-access-h52rs" (OuterVolumeSpecName: "kube-api-access-h52rs") pod "cc9ee62a-cab5-4cac-afa0-18c776d8bab8" (UID: "cc9ee62a-cab5-4cac-afa0-18c776d8bab8"). InnerVolumeSpecName "kube-api-access-h52rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.933544 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc9ee62a-cab5-4cac-afa0-18c776d8bab8" (UID: "cc9ee62a-cab5-4cac-afa0-18c776d8bab8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.967464 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.967774 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.968302 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.968399 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.968454 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h52rs\" (UniqueName: \"kubernetes.io/projected/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-kube-api-access-h52rs\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.968511 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:22 crc kubenswrapper[4772]: I1128 11:26:22.969134 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-config-data" (OuterVolumeSpecName: "config-data") pod "cc9ee62a-cab5-4cac-afa0-18c776d8bab8" (UID: "cc9ee62a-cab5-4cac-afa0-18c776d8bab8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.000011 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cc9ee62a-cab5-4cac-afa0-18c776d8bab8" (UID: "cc9ee62a-cab5-4cac-afa0-18c776d8bab8"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.026628 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.058038 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.070401 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.070436 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.070446 4772 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc9ee62a-cab5-4cac-afa0-18c776d8bab8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.400184 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.478818 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eabec09-9340-4cd4-a7db-ec957878a3a0-scripts\") pod \"7eabec09-9340-4cd4-a7db-ec957878a3a0\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.478926 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eabec09-9340-4cd4-a7db-ec957878a3a0-logs\") pod \"7eabec09-9340-4cd4-a7db-ec957878a3a0\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.479077 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7eabec09-9340-4cd4-a7db-ec957878a3a0-config-data\") pod \"7eabec09-9340-4cd4-a7db-ec957878a3a0\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.479163 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-combined-ca-bundle\") pod \"7eabec09-9340-4cd4-a7db-ec957878a3a0\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.479411 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-horizon-tls-certs\") pod \"7eabec09-9340-4cd4-a7db-ec957878a3a0\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.479456 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvkkp\" (UniqueName: \"kubernetes.io/projected/7eabec09-9340-4cd4-a7db-ec957878a3a0-kube-api-access-qvkkp\") pod \"7eabec09-9340-4cd4-a7db-ec957878a3a0\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 
11:26:23.479520 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-horizon-secret-key\") pod \"7eabec09-9340-4cd4-a7db-ec957878a3a0\" (UID: \"7eabec09-9340-4cd4-a7db-ec957878a3a0\") " Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.480813 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eabec09-9340-4cd4-a7db-ec957878a3a0-logs" (OuterVolumeSpecName: "logs") pod "7eabec09-9340-4cd4-a7db-ec957878a3a0" (UID: "7eabec09-9340-4cd4-a7db-ec957878a3a0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.484391 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7eabec09-9340-4cd4-a7db-ec957878a3a0" (UID: "7eabec09-9340-4cd4-a7db-ec957878a3a0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.488646 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eabec09-9340-4cd4-a7db-ec957878a3a0-kube-api-access-qvkkp" (OuterVolumeSpecName: "kube-api-access-qvkkp") pod "7eabec09-9340-4cd4-a7db-ec957878a3a0" (UID: "7eabec09-9340-4cd4-a7db-ec957878a3a0"). InnerVolumeSpecName "kube-api-access-qvkkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.518606 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eabec09-9340-4cd4-a7db-ec957878a3a0-config-data" (OuterVolumeSpecName: "config-data") pod "7eabec09-9340-4cd4-a7db-ec957878a3a0" (UID: "7eabec09-9340-4cd4-a7db-ec957878a3a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.541680 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7eabec09-9340-4cd4-a7db-ec957878a3a0" (UID: "7eabec09-9340-4cd4-a7db-ec957878a3a0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.575135 4772 generic.go:334] "Generic (PLEG): container finished" podID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerID="13f17c4a0caab514585c5b369eeb1cd0a46f5cf4d06ccba0ade7be0f4ee6672a" exitCode=137 Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.575217 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68fd445458-gvkkx" event={"ID":"7eabec09-9340-4cd4-a7db-ec957878a3a0","Type":"ContainerDied","Data":"13f17c4a0caab514585c5b369eeb1cd0a46f5cf4d06ccba0ade7be0f4ee6672a"} Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.575253 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68fd445458-gvkkx" event={"ID":"7eabec09-9340-4cd4-a7db-ec957878a3a0","Type":"ContainerDied","Data":"f6d078ee348d24c33590db4fdf786727a35d67fb7ffa3040b83af2940435c09b"} Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.575274 4772 scope.go:117] "RemoveContainer" containerID="8c298779730781bcf03dff9dee7b7d6a124a88f3438b03f6e35dab6e372388d5" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.575670 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68fd445458-gvkkx" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.579384 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eabec09-9340-4cd4-a7db-ec957878a3a0-scripts" (OuterVolumeSpecName: "scripts") pod "7eabec09-9340-4cd4-a7db-ec957878a3a0" (UID: "7eabec09-9340-4cd4-a7db-ec957878a3a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.581168 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvkkp\" (UniqueName: \"kubernetes.io/projected/7eabec09-9340-4cd4-a7db-ec957878a3a0-kube-api-access-qvkkp\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.581201 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.581211 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eabec09-9340-4cd4-a7db-ec957878a3a0-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.581222 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eabec09-9340-4cd4-a7db-ec957878a3a0-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.581232 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7eabec09-9340-4cd4-a7db-ec957878a3a0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.581243 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.588887 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"cc9ee62a-cab5-4cac-afa0-18c776d8bab8","Type":"ContainerDied","Data":"19d96e679f631f4a084ff56c8e73139c0f452eb1ad4c8c95a97cc1ed6a3a8d95"} Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.589265 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.615132 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "7eabec09-9340-4cd4-a7db-ec957878a3a0" (UID: "7eabec09-9340-4cd4-a7db-ec957878a3a0"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.652468 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.664485 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.683838 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:26:23 crc kubenswrapper[4772]: E1128 11:26:23.684836 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerName="horizon" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.684856 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerName="horizon" Nov 28 11:26:23 crc kubenswrapper[4772]: E1128 11:26:23.684877 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9ee62a-cab5-4cac-afa0-18c776d8bab8" containerName="glance-httpd" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.684888 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9ee62a-cab5-4cac-afa0-18c776d8bab8" containerName="glance-httpd" Nov 28 11:26:23 crc kubenswrapper[4772]: E1128 11:26:23.684906 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9ee62a-cab5-4cac-afa0-18c776d8bab8" containerName="glance-log" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.684912 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9ee62a-cab5-4cac-afa0-18c776d8bab8" containerName="glance-log" Nov 28 11:26:23 crc kubenswrapper[4772]: E1128 11:26:23.684925 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerName="horizon-log" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.684934 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerName="horizon-log" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.685167 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerName="horizon" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.685189 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" containerName="horizon-log" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.685212 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc9ee62a-cab5-4cac-afa0-18c776d8bab8" containerName="glance-log" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.685229 4772 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cc9ee62a-cab5-4cac-afa0-18c776d8bab8" containerName="glance-httpd" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.685913 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eabec09-9340-4cd4-a7db-ec957878a3a0-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.686472 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.693157 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.693393 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.694341 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.766709 4772 scope.go:117] "RemoveContainer" containerID="13f17c4a0caab514585c5b369eeb1cd0a46f5cf4d06ccba0ade7be0f4ee6672a" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.788124 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090e21bf-8aa4-47f0-99ce-d8225cdac91c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.788179 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/090e21bf-8aa4-47f0-99ce-d8225cdac91c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.788229 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t86k\" (UniqueName: \"kubernetes.io/projected/090e21bf-8aa4-47f0-99ce-d8225cdac91c-kube-api-access-9t86k\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.788261 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.788288 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/090e21bf-8aa4-47f0-99ce-d8225cdac91c-logs\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.788310 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090e21bf-8aa4-47f0-99ce-d8225cdac91c-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.788335 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/090e21bf-8aa4-47f0-99ce-d8225cdac91c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.788390 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/090e21bf-8aa4-47f0-99ce-d8225cdac91c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.802635 4772 scope.go:117] "RemoveContainer" containerID="8c298779730781bcf03dff9dee7b7d6a124a88f3438b03f6e35dab6e372388d5" Nov 28 11:26:23 crc kubenswrapper[4772]: E1128 11:26:23.804623 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c298779730781bcf03dff9dee7b7d6a124a88f3438b03f6e35dab6e372388d5\": container with ID starting with 8c298779730781bcf03dff9dee7b7d6a124a88f3438b03f6e35dab6e372388d5 not found: ID does not exist" containerID="8c298779730781bcf03dff9dee7b7d6a124a88f3438b03f6e35dab6e372388d5" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.804676 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c298779730781bcf03dff9dee7b7d6a124a88f3438b03f6e35dab6e372388d5"} err="failed to get container status \"8c298779730781bcf03dff9dee7b7d6a124a88f3438b03f6e35dab6e372388d5\": rpc error: code = NotFound desc = could not find container \"8c298779730781bcf03dff9dee7b7d6a124a88f3438b03f6e35dab6e372388d5\": container with ID starting with 8c298779730781bcf03dff9dee7b7d6a124a88f3438b03f6e35dab6e372388d5 not found: ID does not exist" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.804714 4772 scope.go:117] "RemoveContainer" containerID="13f17c4a0caab514585c5b369eeb1cd0a46f5cf4d06ccba0ade7be0f4ee6672a" Nov 28 11:26:23 crc kubenswrapper[4772]: E1128 11:26:23.805353 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13f17c4a0caab514585c5b369eeb1cd0a46f5cf4d06ccba0ade7be0f4ee6672a\": container with ID starting with 13f17c4a0caab514585c5b369eeb1cd0a46f5cf4d06ccba0ade7be0f4ee6672a not found: ID does not exist" containerID="13f17c4a0caab514585c5b369eeb1cd0a46f5cf4d06ccba0ade7be0f4ee6672a" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.805417 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13f17c4a0caab514585c5b369eeb1cd0a46f5cf4d06ccba0ade7be0f4ee6672a"} err="failed to get container status \"13f17c4a0caab514585c5b369eeb1cd0a46f5cf4d06ccba0ade7be0f4ee6672a\": rpc error: code = NotFound desc = could not find container \"13f17c4a0caab514585c5b369eeb1cd0a46f5cf4d06ccba0ade7be0f4ee6672a\": container with ID starting with 13f17c4a0caab514585c5b369eeb1cd0a46f5cf4d06ccba0ade7be0f4ee6672a not found: ID does not exist" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.805450 4772 scope.go:117] "RemoveContainer" 
containerID="ebf98bc0f3b316c0279cc09f0929c153b2826be708a628a1a7231be1621f255e" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.842144 4772 scope.go:117] "RemoveContainer" containerID="3ef314e38bd569ccddcb45b6dd790944a5a935910d9ae69d1381d79735b5df85" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.891388 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/090e21bf-8aa4-47f0-99ce-d8225cdac91c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.891505 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090e21bf-8aa4-47f0-99ce-d8225cdac91c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.891539 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/090e21bf-8aa4-47f0-99ce-d8225cdac91c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.891579 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t86k\" (UniqueName: \"kubernetes.io/projected/090e21bf-8aa4-47f0-99ce-d8225cdac91c-kube-api-access-9t86k\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.891607 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.891631 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/090e21bf-8aa4-47f0-99ce-d8225cdac91c-logs\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.891658 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090e21bf-8aa4-47f0-99ce-d8225cdac91c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.891686 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/090e21bf-8aa4-47f0-99ce-d8225cdac91c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.892351 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.892392 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/090e21bf-8aa4-47f0-99ce-d8225cdac91c-logs\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.892497 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/090e21bf-8aa4-47f0-99ce-d8225cdac91c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.898613 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090e21bf-8aa4-47f0-99ce-d8225cdac91c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.901156 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/090e21bf-8aa4-47f0-99ce-d8225cdac91c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.906061 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/090e21bf-8aa4-47f0-99ce-d8225cdac91c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.906283 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090e21bf-8aa4-47f0-99ce-d8225cdac91c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:23 crc kubenswrapper[4772]: I1128 11:26:23.913659 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t86k\" (UniqueName: \"kubernetes.io/projected/090e21bf-8aa4-47f0-99ce-d8225cdac91c-kube-api-access-9t86k\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:24 crc kubenswrapper[4772]: I1128 11:26:23.941749 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68fd445458-gvkkx"] Nov 28 11:26:24 crc kubenswrapper[4772]: I1128 11:26:23.958847 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"090e21bf-8aa4-47f0-99ce-d8225cdac91c\") " pod="openstack/glance-default-internal-api-0" Nov 28 11:26:24 crc kubenswrapper[4772]: I1128 11:26:23.974411 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68fd445458-gvkkx"] 
Nov 28 11:26:24 crc kubenswrapper[4772]: I1128 11:26:24.009982 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eabec09-9340-4cd4-a7db-ec957878a3a0" path="/var/lib/kubelet/pods/7eabec09-9340-4cd4-a7db-ec957878a3a0/volumes" Nov 28 11:26:24 crc kubenswrapper[4772]: I1128 11:26:24.010820 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc9ee62a-cab5-4cac-afa0-18c776d8bab8" path="/var/lib/kubelet/pods/cc9ee62a-cab5-4cac-afa0-18c776d8bab8/volumes" Nov 28 11:26:24 crc kubenswrapper[4772]: I1128 11:26:24.053499 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 11:26:24 crc kubenswrapper[4772]: I1128 11:26:24.603773 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ca0319f-69a6-4083-ac8c-ea905b882e7e","Type":"ContainerStarted","Data":"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda"} Nov 28 11:26:24 crc kubenswrapper[4772]: I1128 11:26:24.604626 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ca0319f-69a6-4083-ac8c-ea905b882e7e","Type":"ContainerStarted","Data":"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930"} Nov 28 11:26:24 crc kubenswrapper[4772]: I1128 11:26:24.608095 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94aac27d-0e3b-431a-be6e-a88e1eeb16db","Type":"ContainerStarted","Data":"d01800fa25ee79ad7ea9f2a3f95bc315bcabd93a3c3e37828063abd0f217cec8"} Nov 28 11:26:24 crc kubenswrapper[4772]: I1128 11:26:24.608148 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94aac27d-0e3b-431a-be6e-a88e1eeb16db","Type":"ContainerStarted","Data":"5703b9c2b479af764b4e9cf015f1ba849ce16f7a23afa2db6b1b7dca772f2b5a"} Nov 28 11:26:24 crc kubenswrapper[4772]: I1128 11:26:24.637828 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.6378081570000003 podStartE2EDuration="3.637808157s" podCreationTimestamp="2025-11-28 11:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:26:24.632252273 +0000 UTC m=+1182.955495500" watchObservedRunningTime="2025-11-28 11:26:24.637808157 +0000 UTC m=+1182.961051384" Nov 28 11:26:24 crc kubenswrapper[4772]: I1128 11:26:24.709952 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 11:26:24 crc kubenswrapper[4772]: W1128 11:26:24.715666 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod090e21bf_8aa4_47f0_99ce_d8225cdac91c.slice/crio-06d5e1af5b05341eef92edae6f58c33cf1dbc9965e6d71e692e4d643c9bb2d39 WatchSource:0}: Error finding container 06d5e1af5b05341eef92edae6f58c33cf1dbc9965e6d71e692e4d643c9bb2d39: Status 404 returned error can't find the container with id 06d5e1af5b05341eef92edae6f58c33cf1dbc9965e6d71e692e4d643c9bb2d39 Nov 28 11:26:25 crc kubenswrapper[4772]: I1128 11:26:25.637334 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"090e21bf-8aa4-47f0-99ce-d8225cdac91c","Type":"ContainerStarted","Data":"c365c09f725ea13873ea69c4026235345bbcf28f57a475ca42703d13aa07a9e0"} Nov 28 11:26:25 crc 
kubenswrapper[4772]: I1128 11:26:25.638493 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"090e21bf-8aa4-47f0-99ce-d8225cdac91c","Type":"ContainerStarted","Data":"06d5e1af5b05341eef92edae6f58c33cf1dbc9965e6d71e692e4d643c9bb2d39"} Nov 28 11:26:26 crc kubenswrapper[4772]: I1128 11:26:26.646494 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"090e21bf-8aa4-47f0-99ce-d8225cdac91c","Type":"ContainerStarted","Data":"f7c86bee9db1b25b2e89f972d37070e9e5b314fa961a8f0f038f65fb64dde7ac"} Nov 28 11:26:26 crc kubenswrapper[4772]: I1128 11:26:26.649709 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ca0319f-69a6-4083-ac8c-ea905b882e7e","Type":"ContainerStarted","Data":"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc"} Nov 28 11:26:26 crc kubenswrapper[4772]: I1128 11:26:26.650061 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="sg-core" containerID="cri-o://7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda" gracePeriod=30 Nov 28 11:26:26 crc kubenswrapper[4772]: I1128 11:26:26.650110 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="ceilometer-notification-agent" containerID="cri-o://aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930" gracePeriod=30 Nov 28 11:26:26 crc kubenswrapper[4772]: I1128 11:26:26.650110 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="proxy-httpd" containerID="cri-o://e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc" gracePeriod=30 Nov 28 11:26:26 crc kubenswrapper[4772]: I1128 11:26:26.650210 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="ceilometer-central-agent" containerID="cri-o://03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25" gracePeriod=30 Nov 28 11:26:26 crc kubenswrapper[4772]: I1128 11:26:26.650070 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 11:26:26 crc kubenswrapper[4772]: I1128 11:26:26.720264 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.314550528 podStartE2EDuration="7.720230574s" podCreationTimestamp="2025-11-28 11:26:19 +0000 UTC" firstStartedPulling="2025-11-28 11:26:20.457511852 +0000 UTC m=+1178.780755079" lastFinishedPulling="2025-11-28 11:26:25.863191888 +0000 UTC m=+1184.186435125" observedRunningTime="2025-11-28 11:26:26.715062329 +0000 UTC m=+1185.038305566" watchObservedRunningTime="2025-11-28 11:26:26.720230574 +0000 UTC m=+1185.043473821" Nov 28 11:26:26 crc kubenswrapper[4772]: I1128 11:26:26.721217 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.721209519 podStartE2EDuration="3.721209519s" podCreationTimestamp="2025-11-28 11:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:26:26.686562436 +0000 UTC 
m=+1185.009805663" watchObservedRunningTime="2025-11-28 11:26:26.721209519 +0000 UTC m=+1185.044452756" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.571194 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.662408 4772 generic.go:334] "Generic (PLEG): container finished" podID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerID="e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc" exitCode=0 Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.662452 4772 generic.go:334] "Generic (PLEG): container finished" podID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerID="7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda" exitCode=2 Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.662460 4772 generic.go:334] "Generic (PLEG): container finished" podID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerID="aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930" exitCode=0 Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.662469 4772 generic.go:334] "Generic (PLEG): container finished" podID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerID="03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25" exitCode=0 Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.662927 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.663696 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ca0319f-69a6-4083-ac8c-ea905b882e7e","Type":"ContainerDied","Data":"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc"} Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.663733 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ca0319f-69a6-4083-ac8c-ea905b882e7e","Type":"ContainerDied","Data":"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda"} Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.663745 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ca0319f-69a6-4083-ac8c-ea905b882e7e","Type":"ContainerDied","Data":"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930"} Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.663753 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ca0319f-69a6-4083-ac8c-ea905b882e7e","Type":"ContainerDied","Data":"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25"} Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.663762 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ca0319f-69a6-4083-ac8c-ea905b882e7e","Type":"ContainerDied","Data":"73055ddaeb2a5c78036c5abf4006c904adb926e82ff730b84deedbb4482528a3"} Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.663779 4772 scope.go:117] "RemoveContainer" containerID="e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.681756 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ca0319f-69a6-4083-ac8c-ea905b882e7e-log-httpd\") pod \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.682215 4772 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5fcn\" (UniqueName: \"kubernetes.io/projected/8ca0319f-69a6-4083-ac8c-ea905b882e7e-kube-api-access-z5fcn\") pod \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.682381 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-scripts\") pod \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.682442 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-config-data\") pod \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.682507 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ca0319f-69a6-4083-ac8c-ea905b882e7e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8ca0319f-69a6-4083-ac8c-ea905b882e7e" (UID: "8ca0319f-69a6-4083-ac8c-ea905b882e7e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.682550 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-combined-ca-bundle\") pod \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.682695 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ca0319f-69a6-4083-ac8c-ea905b882e7e-run-httpd\") pod \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.682744 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-sg-core-conf-yaml\") pod \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\" (UID: \"8ca0319f-69a6-4083-ac8c-ea905b882e7e\") " Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.683278 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ca0319f-69a6-4083-ac8c-ea905b882e7e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8ca0319f-69a6-4083-ac8c-ea905b882e7e" (UID: "8ca0319f-69a6-4083-ac8c-ea905b882e7e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.683814 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ca0319f-69a6-4083-ac8c-ea905b882e7e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.683841 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ca0319f-69a6-4083-ac8c-ea905b882e7e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.700615 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca0319f-69a6-4083-ac8c-ea905b882e7e-kube-api-access-z5fcn" (OuterVolumeSpecName: "kube-api-access-z5fcn") pod "8ca0319f-69a6-4083-ac8c-ea905b882e7e" (UID: "8ca0319f-69a6-4083-ac8c-ea905b882e7e"). InnerVolumeSpecName "kube-api-access-z5fcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.709735 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-scripts" (OuterVolumeSpecName: "scripts") pod "8ca0319f-69a6-4083-ac8c-ea905b882e7e" (UID: "8ca0319f-69a6-4083-ac8c-ea905b882e7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.713471 4772 scope.go:117] "RemoveContainer" containerID="7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.763439 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8ca0319f-69a6-4083-ac8c-ea905b882e7e" (UID: "8ca0319f-69a6-4083-ac8c-ea905b882e7e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.782647 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ca0319f-69a6-4083-ac8c-ea905b882e7e" (UID: "8ca0319f-69a6-4083-ac8c-ea905b882e7e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.787053 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.787076 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.787696 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5fcn\" (UniqueName: \"kubernetes.io/projected/8ca0319f-69a6-4083-ac8c-ea905b882e7e-kube-api-access-z5fcn\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.787708 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.814151 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-config-data" (OuterVolumeSpecName: "config-data") pod "8ca0319f-69a6-4083-ac8c-ea905b882e7e" (UID: "8ca0319f-69a6-4083-ac8c-ea905b882e7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.852319 4772 scope.go:117] "RemoveContainer" containerID="aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.885787 4772 scope.go:117] "RemoveContainer" containerID="03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.890017 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca0319f-69a6-4083-ac8c-ea905b882e7e-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.918710 4772 scope.go:117] "RemoveContainer" containerID="e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc" Nov 28 11:26:27 crc kubenswrapper[4772]: E1128 11:26:27.919186 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc\": container with ID starting with e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc not found: ID does not exist" containerID="e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.919242 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc"} err="failed to get container status \"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc\": rpc error: code = NotFound desc = could not find container \"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc\": container with ID starting with e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc not found: ID does not exist" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.919279 4772 scope.go:117] "RemoveContainer" 
containerID="7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda" Nov 28 11:26:27 crc kubenswrapper[4772]: E1128 11:26:27.919651 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda\": container with ID starting with 7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda not found: ID does not exist" containerID="7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.919685 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda"} err="failed to get container status \"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda\": rpc error: code = NotFound desc = could not find container \"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda\": container with ID starting with 7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda not found: ID does not exist" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.919704 4772 scope.go:117] "RemoveContainer" containerID="aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930" Nov 28 11:26:27 crc kubenswrapper[4772]: E1128 11:26:27.919889 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930\": container with ID starting with aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930 not found: ID does not exist" containerID="aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.919913 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930"} err="failed to get container status \"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930\": rpc error: code = NotFound desc = could not find container \"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930\": container with ID starting with aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930 not found: ID does not exist" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.919929 4772 scope.go:117] "RemoveContainer" containerID="03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25" Nov 28 11:26:27 crc kubenswrapper[4772]: E1128 11:26:27.920137 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25\": container with ID starting with 03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25 not found: ID does not exist" containerID="03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.920167 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25"} err="failed to get container status \"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25\": rpc error: code = NotFound desc = could not find container \"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25\": container with ID starting with 
03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25 not found: ID does not exist" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.920184 4772 scope.go:117] "RemoveContainer" containerID="e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.920461 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc"} err="failed to get container status \"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc\": rpc error: code = NotFound desc = could not find container \"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc\": container with ID starting with e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc not found: ID does not exist" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.920497 4772 scope.go:117] "RemoveContainer" containerID="7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.920701 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda"} err="failed to get container status \"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda\": rpc error: code = NotFound desc = could not find container \"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda\": container with ID starting with 7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda not found: ID does not exist" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.920721 4772 scope.go:117] "RemoveContainer" containerID="aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.920892 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930"} err="failed to get container status \"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930\": rpc error: code = NotFound desc = could not find container \"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930\": container with ID starting with aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930 not found: ID does not exist" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.920909 4772 scope.go:117] "RemoveContainer" containerID="03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.921091 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25"} err="failed to get container status \"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25\": rpc error: code = NotFound desc = could not find container \"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25\": container with ID starting with 03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25 not found: ID does not exist" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.921112 4772 scope.go:117] "RemoveContainer" containerID="e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.921275 4772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc"} err="failed to get container status \"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc\": rpc error: code = NotFound desc = could not find container \"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc\": container with ID starting with e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc not found: ID does not exist" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.921291 4772 scope.go:117] "RemoveContainer" containerID="7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.921475 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda"} err="failed to get container status \"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda\": rpc error: code = NotFound desc = could not find container \"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda\": container with ID starting with 7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda not found: ID does not exist" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.921493 4772 scope.go:117] "RemoveContainer" containerID="aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.921702 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930"} err="failed to get container status \"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930\": rpc error: code = NotFound desc = could not find container \"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930\": container with ID starting with aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930 not found: ID does not exist" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.921732 4772 scope.go:117] "RemoveContainer" containerID="03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.922028 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25"} err="failed to get container status \"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25\": rpc error: code = NotFound desc = could not find container \"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25\": container with ID starting with 03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25 not found: ID does not exist" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.922060 4772 scope.go:117] "RemoveContainer" containerID="e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.922418 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc"} err="failed to get container status \"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc\": rpc error: code = NotFound desc = could not find container \"e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc\": container with ID starting with e1884f4e489bafde1ca40248e61377ac1541d3f35e972165b6a4d4a8116a39fc not found: ID does not exist" Nov 
28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.922492 4772 scope.go:117] "RemoveContainer" containerID="7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.922839 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda"} err="failed to get container status \"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda\": rpc error: code = NotFound desc = could not find container \"7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda\": container with ID starting with 7fa0eff1217b7d3b075daf88f57f4684555ca3ce4213229557a16d66138d2eda not found: ID does not exist" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.922876 4772 scope.go:117] "RemoveContainer" containerID="aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.923154 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930"} err="failed to get container status \"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930\": rpc error: code = NotFound desc = could not find container \"aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930\": container with ID starting with aa67c65bfea60eb4c73b3efa7e2446ddcd413f916d88feeba41bd87b4af39930 not found: ID does not exist" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.923189 4772 scope.go:117] "RemoveContainer" containerID="03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25" Nov 28 11:26:27 crc kubenswrapper[4772]: I1128 11:26:27.923466 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25"} err="failed to get container status \"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25\": rpc error: code = NotFound desc = could not find container \"03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25\": container with ID starting with 03a2bd475b73c816282a5a9855fab24f075fca98e7376bc45634480104cb1b25 not found: ID does not exist" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.008582 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.035699 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.047387 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:28 crc kubenswrapper[4772]: E1128 11:26:28.047944 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="ceilometer-central-agent" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.047969 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="ceilometer-central-agent" Nov 28 11:26:28 crc kubenswrapper[4772]: E1128 11:26:28.047997 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="ceilometer-notification-agent" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.048008 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" 
containerName="ceilometer-notification-agent" Nov 28 11:26:28 crc kubenswrapper[4772]: E1128 11:26:28.048063 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="proxy-httpd" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.048075 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="proxy-httpd" Nov 28 11:26:28 crc kubenswrapper[4772]: E1128 11:26:28.048088 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="sg-core" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.048096 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="sg-core" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.048351 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="ceilometer-central-agent" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.048393 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="sg-core" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.048409 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="proxy-httpd" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.048432 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" containerName="ceilometer-notification-agent" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.050989 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.053839 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.057894 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.059806 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.200685 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.200729 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.200867 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-config-data\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.200910 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-scripts\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.200938 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmfzn\" (UniqueName: \"kubernetes.io/projected/3a45fc37-a955-4087-aa75-d04a64407dae-kube-api-access-dmfzn\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.201115 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a45fc37-a955-4087-aa75-d04a64407dae-run-httpd\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.201154 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a45fc37-a955-4087-aa75-d04a64407dae-log-httpd\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.303979 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-scripts\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.304390 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmfzn\" (UniqueName: \"kubernetes.io/projected/3a45fc37-a955-4087-aa75-d04a64407dae-kube-api-access-dmfzn\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.304454 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a45fc37-a955-4087-aa75-d04a64407dae-run-httpd\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.304472 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a45fc37-a955-4087-aa75-d04a64407dae-log-httpd\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.304508 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.304525 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: 
I1128 11:26:28.304607 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-config-data\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.306047 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a45fc37-a955-4087-aa75-d04a64407dae-run-httpd\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.306890 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a45fc37-a955-4087-aa75-d04a64407dae-log-httpd\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.311606 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-config-data\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.312170 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.312891 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-scripts\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.312915 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.324354 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmfzn\" (UniqueName: \"kubernetes.io/projected/3a45fc37-a955-4087-aa75-d04a64407dae-kube-api-access-dmfzn\") pod \"ceilometer-0\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.372565 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:26:28 crc kubenswrapper[4772]: I1128 11:26:28.849942 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:28 crc kubenswrapper[4772]: W1128 11:26:28.851115 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a45fc37_a955_4087_aa75_d04a64407dae.slice/crio-995c19f63228d325004470bafdc8c4ba6dd7244035c66a9ae235ca56b8cee13e WatchSource:0}: Error finding container 995c19f63228d325004470bafdc8c4ba6dd7244035c66a9ae235ca56b8cee13e: Status 404 returned error can't find the container with id 995c19f63228d325004470bafdc8c4ba6dd7244035c66a9ae235ca56b8cee13e Nov 28 11:26:29 crc kubenswrapper[4772]: I1128 11:26:29.705266 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a45fc37-a955-4087-aa75-d04a64407dae","Type":"ContainerStarted","Data":"5cd8d04da882b8bd3bbd1b40dc78ec60c6eacb1008431e64e36aa3bf20927683"} Nov 28 11:26:29 crc kubenswrapper[4772]: I1128 11:26:29.705387 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a45fc37-a955-4087-aa75-d04a64407dae","Type":"ContainerStarted","Data":"995c19f63228d325004470bafdc8c4ba6dd7244035c66a9ae235ca56b8cee13e"} Nov 28 11:26:30 crc kubenswrapper[4772]: I1128 11:26:30.022128 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ca0319f-69a6-4083-ac8c-ea905b882e7e" path="/var/lib/kubelet/pods/8ca0319f-69a6-4083-ac8c-ea905b882e7e/volumes" Nov 28 11:26:30 crc kubenswrapper[4772]: I1128 11:26:30.718614 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a45fc37-a955-4087-aa75-d04a64407dae","Type":"ContainerStarted","Data":"9dc891759d8541eb6994daeaf61943dea484ff7a4a8aebbb8c5ab31ccc8479c3"} Nov 28 11:26:30 crc kubenswrapper[4772]: I1128 11:26:30.898690 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:31 crc kubenswrapper[4772]: I1128 11:26:31.731397 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a45fc37-a955-4087-aa75-d04a64407dae","Type":"ContainerStarted","Data":"d6c90e9fa21b67d45d08fd162bc554079cdd101804ee7965544401092cdc3b08"} Nov 28 11:26:31 crc kubenswrapper[4772]: I1128 11:26:31.896963 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 11:26:31 crc kubenswrapper[4772]: I1128 11:26:31.897024 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 11:26:31 crc kubenswrapper[4772]: I1128 11:26:31.939230 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 11:26:31 crc kubenswrapper[4772]: I1128 11:26:31.954332 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 11:26:32 crc kubenswrapper[4772]: I1128 11:26:32.740238 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 11:26:32 crc kubenswrapper[4772]: I1128 11:26:32.740822 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.618372 4772 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-api-db-create-vzlbg"] Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.620351 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vzlbg" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.648851 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vzlbg"] Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.716136 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2q5qg"] Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.717578 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2q5qg" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.724630 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2q5qg"] Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.732190 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h57t\" (UniqueName: \"kubernetes.io/projected/11fb5581-331a-4b2b-9cae-ca7679b297b2-kube-api-access-4h57t\") pod \"nova-api-db-create-vzlbg\" (UID: \"11fb5581-331a-4b2b-9cae-ca7679b297b2\") " pod="openstack/nova-api-db-create-vzlbg" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.732305 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11fb5581-331a-4b2b-9cae-ca7679b297b2-operator-scripts\") pod \"nova-api-db-create-vzlbg\" (UID: \"11fb5581-331a-4b2b-9cae-ca7679b297b2\") " pod="openstack/nova-api-db-create-vzlbg" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.757521 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="ceilometer-central-agent" containerID="cri-o://5cd8d04da882b8bd3bbd1b40dc78ec60c6eacb1008431e64e36aa3bf20927683" gracePeriod=30 Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.757949 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a45fc37-a955-4087-aa75-d04a64407dae","Type":"ContainerStarted","Data":"d8632e84c17b30152bb522c389516d6f695613ae14302cee84f16b2bf422b778"} Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.758037 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.758308 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="proxy-httpd" containerID="cri-o://d8632e84c17b30152bb522c389516d6f695613ae14302cee84f16b2bf422b778" gracePeriod=30 Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.758377 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="sg-core" containerID="cri-o://d6c90e9fa21b67d45d08fd162bc554079cdd101804ee7965544401092cdc3b08" gracePeriod=30 Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.758416 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="ceilometer-notification-agent" containerID="cri-o://9dc891759d8541eb6994daeaf61943dea484ff7a4a8aebbb8c5ab31ccc8479c3" 
gracePeriod=30 Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.823712 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.085284402 podStartE2EDuration="5.823689296s" podCreationTimestamp="2025-11-28 11:26:28 +0000 UTC" firstStartedPulling="2025-11-28 11:26:28.857923619 +0000 UTC m=+1187.181166846" lastFinishedPulling="2025-11-28 11:26:32.596328513 +0000 UTC m=+1190.919571740" observedRunningTime="2025-11-28 11:26:33.793836738 +0000 UTC m=+1192.117079965" watchObservedRunningTime="2025-11-28 11:26:33.823689296 +0000 UTC m=+1192.146932523" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.826956 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-115c-account-create-update-l4s9j"] Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.828498 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-115c-account-create-update-l4s9j" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.831900 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.833911 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11fb5581-331a-4b2b-9cae-ca7679b297b2-operator-scripts\") pod \"nova-api-db-create-vzlbg\" (UID: \"11fb5581-331a-4b2b-9cae-ca7679b297b2\") " pod="openstack/nova-api-db-create-vzlbg" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.834069 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s2qb\" (UniqueName: \"kubernetes.io/projected/fe1b7037-466b-4290-be1a-7ede41184bfc-kube-api-access-7s2qb\") pod \"nova-cell0-db-create-2q5qg\" (UID: \"fe1b7037-466b-4290-be1a-7ede41184bfc\") " pod="openstack/nova-cell0-db-create-2q5qg" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.834115 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h57t\" (UniqueName: \"kubernetes.io/projected/11fb5581-331a-4b2b-9cae-ca7679b297b2-kube-api-access-4h57t\") pod \"nova-api-db-create-vzlbg\" (UID: \"11fb5581-331a-4b2b-9cae-ca7679b297b2\") " pod="openstack/nova-api-db-create-vzlbg" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.834145 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe1b7037-466b-4290-be1a-7ede41184bfc-operator-scripts\") pod \"nova-cell0-db-create-2q5qg\" (UID: \"fe1b7037-466b-4290-be1a-7ede41184bfc\") " pod="openstack/nova-cell0-db-create-2q5qg" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.836077 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11fb5581-331a-4b2b-9cae-ca7679b297b2-operator-scripts\") pod \"nova-api-db-create-vzlbg\" (UID: \"11fb5581-331a-4b2b-9cae-ca7679b297b2\") " pod="openstack/nova-api-db-create-vzlbg" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.849534 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-115c-account-create-update-l4s9j"] Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.855871 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h57t\" (UniqueName: 
\"kubernetes.io/projected/11fb5581-331a-4b2b-9cae-ca7679b297b2-kube-api-access-4h57t\") pod \"nova-api-db-create-vzlbg\" (UID: \"11fb5581-331a-4b2b-9cae-ca7679b297b2\") " pod="openstack/nova-api-db-create-vzlbg" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.908573 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mkk4l"] Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.910805 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mkk4l" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.917739 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mkk4l"] Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.935856 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vltx8\" (UniqueName: \"kubernetes.io/projected/27a2af00-c296-4e60-849c-e0157763aaa8-kube-api-access-vltx8\") pod \"nova-api-115c-account-create-update-l4s9j\" (UID: \"27a2af00-c296-4e60-849c-e0157763aaa8\") " pod="openstack/nova-api-115c-account-create-update-l4s9j" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.935949 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s2qb\" (UniqueName: \"kubernetes.io/projected/fe1b7037-466b-4290-be1a-7ede41184bfc-kube-api-access-7s2qb\") pod \"nova-cell0-db-create-2q5qg\" (UID: \"fe1b7037-466b-4290-be1a-7ede41184bfc\") " pod="openstack/nova-cell0-db-create-2q5qg" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.936018 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe1b7037-466b-4290-be1a-7ede41184bfc-operator-scripts\") pod \"nova-cell0-db-create-2q5qg\" (UID: \"fe1b7037-466b-4290-be1a-7ede41184bfc\") " pod="openstack/nova-cell0-db-create-2q5qg" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.936123 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27a2af00-c296-4e60-849c-e0157763aaa8-operator-scripts\") pod \"nova-api-115c-account-create-update-l4s9j\" (UID: \"27a2af00-c296-4e60-849c-e0157763aaa8\") " pod="openstack/nova-api-115c-account-create-update-l4s9j" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.937230 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe1b7037-466b-4290-be1a-7ede41184bfc-operator-scripts\") pod \"nova-cell0-db-create-2q5qg\" (UID: \"fe1b7037-466b-4290-be1a-7ede41184bfc\") " pod="openstack/nova-cell0-db-create-2q5qg" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.943027 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vzlbg" Nov 28 11:26:33 crc kubenswrapper[4772]: I1128 11:26:33.955068 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s2qb\" (UniqueName: \"kubernetes.io/projected/fe1b7037-466b-4290-be1a-7ede41184bfc-kube-api-access-7s2qb\") pod \"nova-cell0-db-create-2q5qg\" (UID: \"fe1b7037-466b-4290-be1a-7ede41184bfc\") " pod="openstack/nova-cell0-db-create-2q5qg" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.033180 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-2q5qg" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.037865 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lznw2\" (UniqueName: \"kubernetes.io/projected/10e51d4a-d9ff-4393-bb1f-5ad90c13096f-kube-api-access-lznw2\") pod \"nova-cell1-db-create-mkk4l\" (UID: \"10e51d4a-d9ff-4393-bb1f-5ad90c13096f\") " pod="openstack/nova-cell1-db-create-mkk4l" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.038257 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27a2af00-c296-4e60-849c-e0157763aaa8-operator-scripts\") pod \"nova-api-115c-account-create-update-l4s9j\" (UID: \"27a2af00-c296-4e60-849c-e0157763aaa8\") " pod="openstack/nova-api-115c-account-create-update-l4s9j" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.038311 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10e51d4a-d9ff-4393-bb1f-5ad90c13096f-operator-scripts\") pod \"nova-cell1-db-create-mkk4l\" (UID: \"10e51d4a-d9ff-4393-bb1f-5ad90c13096f\") " pod="openstack/nova-cell1-db-create-mkk4l" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.038347 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vltx8\" (UniqueName: \"kubernetes.io/projected/27a2af00-c296-4e60-849c-e0157763aaa8-kube-api-access-vltx8\") pod \"nova-api-115c-account-create-update-l4s9j\" (UID: \"27a2af00-c296-4e60-849c-e0157763aaa8\") " pod="openstack/nova-api-115c-account-create-update-l4s9j" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.052910 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-a871-account-create-update-8tsff"] Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.056586 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a871-account-create-update-8tsff" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.059346 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.063329 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27a2af00-c296-4e60-849c-e0157763aaa8-operator-scripts\") pod \"nova-api-115c-account-create-update-l4s9j\" (UID: \"27a2af00-c296-4e60-849c-e0157763aaa8\") " pod="openstack/nova-api-115c-account-create-update-l4s9j" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.067888 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.067941 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.101546 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vltx8\" (UniqueName: \"kubernetes.io/projected/27a2af00-c296-4e60-849c-e0157763aaa8-kube-api-access-vltx8\") pod \"nova-api-115c-account-create-update-l4s9j\" (UID: \"27a2af00-c296-4e60-849c-e0157763aaa8\") " pod="openstack/nova-api-115c-account-create-update-l4s9j" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.122435 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a871-account-create-update-8tsff"] Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.133945 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.146784 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.149340 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lznw2\" (UniqueName: \"kubernetes.io/projected/10e51d4a-d9ff-4393-bb1f-5ad90c13096f-kube-api-access-lznw2\") pod \"nova-cell1-db-create-mkk4l\" (UID: \"10e51d4a-d9ff-4393-bb1f-5ad90c13096f\") " pod="openstack/nova-cell1-db-create-mkk4l" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.152250 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3068411c-0dc8-47e4-a58b-abf587764c20-operator-scripts\") pod \"nova-cell0-a871-account-create-update-8tsff\" (UID: \"3068411c-0dc8-47e4-a58b-abf587764c20\") " pod="openstack/nova-cell0-a871-account-create-update-8tsff" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.152588 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwjf7\" (UniqueName: \"kubernetes.io/projected/3068411c-0dc8-47e4-a58b-abf587764c20-kube-api-access-gwjf7\") pod \"nova-cell0-a871-account-create-update-8tsff\" (UID: \"3068411c-0dc8-47e4-a58b-abf587764c20\") " pod="openstack/nova-cell0-a871-account-create-update-8tsff" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.152688 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10e51d4a-d9ff-4393-bb1f-5ad90c13096f-operator-scripts\") pod 
\"nova-cell1-db-create-mkk4l\" (UID: \"10e51d4a-d9ff-4393-bb1f-5ad90c13096f\") " pod="openstack/nova-cell1-db-create-mkk4l" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.158010 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10e51d4a-d9ff-4393-bb1f-5ad90c13096f-operator-scripts\") pod \"nova-cell1-db-create-mkk4l\" (UID: \"10e51d4a-d9ff-4393-bb1f-5ad90c13096f\") " pod="openstack/nova-cell1-db-create-mkk4l" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.159312 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-115c-account-create-update-l4s9j" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.216619 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lznw2\" (UniqueName: \"kubernetes.io/projected/10e51d4a-d9ff-4393-bb1f-5ad90c13096f-kube-api-access-lznw2\") pod \"nova-cell1-db-create-mkk4l\" (UID: \"10e51d4a-d9ff-4393-bb1f-5ad90c13096f\") " pod="openstack/nova-cell1-db-create-mkk4l" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.260886 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwjf7\" (UniqueName: \"kubernetes.io/projected/3068411c-0dc8-47e4-a58b-abf587764c20-kube-api-access-gwjf7\") pod \"nova-cell0-a871-account-create-update-8tsff\" (UID: \"3068411c-0dc8-47e4-a58b-abf587764c20\") " pod="openstack/nova-cell0-a871-account-create-update-8tsff" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.261036 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3068411c-0dc8-47e4-a58b-abf587764c20-operator-scripts\") pod \"nova-cell0-a871-account-create-update-8tsff\" (UID: \"3068411c-0dc8-47e4-a58b-abf587764c20\") " pod="openstack/nova-cell0-a871-account-create-update-8tsff" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.261763 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3068411c-0dc8-47e4-a58b-abf587764c20-operator-scripts\") pod \"nova-cell0-a871-account-create-update-8tsff\" (UID: \"3068411c-0dc8-47e4-a58b-abf587764c20\") " pod="openstack/nova-cell0-a871-account-create-update-8tsff" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.289521 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwjf7\" (UniqueName: \"kubernetes.io/projected/3068411c-0dc8-47e4-a58b-abf587764c20-kube-api-access-gwjf7\") pod \"nova-cell0-a871-account-create-update-8tsff\" (UID: \"3068411c-0dc8-47e4-a58b-abf587764c20\") " pod="openstack/nova-cell0-a871-account-create-update-8tsff" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.326208 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e98e-account-create-update-77lrr"] Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.328781 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e98e-account-create-update-77lrr" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.332282 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.343371 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e98e-account-create-update-77lrr"] Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.364700 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7440d09-9e03-450e-bbda-f0680e63dac4-operator-scripts\") pod \"nova-cell1-e98e-account-create-update-77lrr\" (UID: \"b7440d09-9e03-450e-bbda-f0680e63dac4\") " pod="openstack/nova-cell1-e98e-account-create-update-77lrr" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.364852 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svq8d\" (UniqueName: \"kubernetes.io/projected/b7440d09-9e03-450e-bbda-f0680e63dac4-kube-api-access-svq8d\") pod \"nova-cell1-e98e-account-create-update-77lrr\" (UID: \"b7440d09-9e03-450e-bbda-f0680e63dac4\") " pod="openstack/nova-cell1-e98e-account-create-update-77lrr" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.414705 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mkk4l" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.421796 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a871-account-create-update-8tsff" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.466696 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svq8d\" (UniqueName: \"kubernetes.io/projected/b7440d09-9e03-450e-bbda-f0680e63dac4-kube-api-access-svq8d\") pod \"nova-cell1-e98e-account-create-update-77lrr\" (UID: \"b7440d09-9e03-450e-bbda-f0680e63dac4\") " pod="openstack/nova-cell1-e98e-account-create-update-77lrr" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.466800 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7440d09-9e03-450e-bbda-f0680e63dac4-operator-scripts\") pod \"nova-cell1-e98e-account-create-update-77lrr\" (UID: \"b7440d09-9e03-450e-bbda-f0680e63dac4\") " pod="openstack/nova-cell1-e98e-account-create-update-77lrr" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.467537 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7440d09-9e03-450e-bbda-f0680e63dac4-operator-scripts\") pod \"nova-cell1-e98e-account-create-update-77lrr\" (UID: \"b7440d09-9e03-450e-bbda-f0680e63dac4\") " pod="openstack/nova-cell1-e98e-account-create-update-77lrr" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.494032 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svq8d\" (UniqueName: \"kubernetes.io/projected/b7440d09-9e03-450e-bbda-f0680e63dac4-kube-api-access-svq8d\") pod \"nova-cell1-e98e-account-create-update-77lrr\" (UID: \"b7440d09-9e03-450e-bbda-f0680e63dac4\") " pod="openstack/nova-cell1-e98e-account-create-update-77lrr" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.506017 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-db-create-vzlbg"] Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.700682 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e98e-account-create-update-77lrr" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.801379 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2q5qg"] Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.813222 4772 generic.go:334] "Generic (PLEG): container finished" podID="3a45fc37-a955-4087-aa75-d04a64407dae" containerID="d8632e84c17b30152bb522c389516d6f695613ae14302cee84f16b2bf422b778" exitCode=0 Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.813271 4772 generic.go:334] "Generic (PLEG): container finished" podID="3a45fc37-a955-4087-aa75-d04a64407dae" containerID="d6c90e9fa21b67d45d08fd162bc554079cdd101804ee7965544401092cdc3b08" exitCode=2 Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.813282 4772 generic.go:334] "Generic (PLEG): container finished" podID="3a45fc37-a955-4087-aa75-d04a64407dae" containerID="9dc891759d8541eb6994daeaf61943dea484ff7a4a8aebbb8c5ab31ccc8479c3" exitCode=0 Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.813296 4772 generic.go:334] "Generic (PLEG): container finished" podID="3a45fc37-a955-4087-aa75-d04a64407dae" containerID="5cd8d04da882b8bd3bbd1b40dc78ec60c6eacb1008431e64e36aa3bf20927683" exitCode=0 Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.813391 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a45fc37-a955-4087-aa75-d04a64407dae","Type":"ContainerDied","Data":"d8632e84c17b30152bb522c389516d6f695613ae14302cee84f16b2bf422b778"} Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.813432 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a45fc37-a955-4087-aa75-d04a64407dae","Type":"ContainerDied","Data":"d6c90e9fa21b67d45d08fd162bc554079cdd101804ee7965544401092cdc3b08"} Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.813447 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a45fc37-a955-4087-aa75-d04a64407dae","Type":"ContainerDied","Data":"9dc891759d8541eb6994daeaf61943dea484ff7a4a8aebbb8c5ab31ccc8479c3"} Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.813460 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a45fc37-a955-4087-aa75-d04a64407dae","Type":"ContainerDied","Data":"5cd8d04da882b8bd3bbd1b40dc78ec60c6eacb1008431e64e36aa3bf20927683"} Nov 28 11:26:34 crc kubenswrapper[4772]: W1128 11:26:34.817089 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1b7037_466b_4290_be1a_7ede41184bfc.slice/crio-78d0c34d225f0c8abc81970b7259f3a2e1bb91f19ff368e25ac8d1d869d4d2ad WatchSource:0}: Error finding container 78d0c34d225f0c8abc81970b7259f3a2e1bb91f19ff368e25ac8d1d869d4d2ad: Status 404 returned error can't find the container with id 78d0c34d225f0c8abc81970b7259f3a2e1bb91f19ff368e25ac8d1d869d4d2ad Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.834684 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vzlbg" event={"ID":"11fb5581-331a-4b2b-9cae-ca7679b297b2","Type":"ContainerStarted","Data":"f625f21b482d5da8de91d48c791cb70d0e598f1a0acb8d21f123465d38603982"} Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.835592 4772 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.835637 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 11:26:34 crc kubenswrapper[4772]: I1128 11:26:34.870453 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-115c-account-create-update-l4s9j"] Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.048951 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.079707 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-combined-ca-bundle\") pod \"3a45fc37-a955-4087-aa75-d04a64407dae\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.079850 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-config-data\") pod \"3a45fc37-a955-4087-aa75-d04a64407dae\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.079905 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-sg-core-conf-yaml\") pod \"3a45fc37-a955-4087-aa75-d04a64407dae\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.079951 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a45fc37-a955-4087-aa75-d04a64407dae-run-httpd\") pod \"3a45fc37-a955-4087-aa75-d04a64407dae\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.080007 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a45fc37-a955-4087-aa75-d04a64407dae-log-httpd\") pod \"3a45fc37-a955-4087-aa75-d04a64407dae\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.080098 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-scripts\") pod \"3a45fc37-a955-4087-aa75-d04a64407dae\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.080135 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmfzn\" (UniqueName: \"kubernetes.io/projected/3a45fc37-a955-4087-aa75-d04a64407dae-kube-api-access-dmfzn\") pod \"3a45fc37-a955-4087-aa75-d04a64407dae\" (UID: \"3a45fc37-a955-4087-aa75-d04a64407dae\") " Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.080927 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a45fc37-a955-4087-aa75-d04a64407dae-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3a45fc37-a955-4087-aa75-d04a64407dae" (UID: "3a45fc37-a955-4087-aa75-d04a64407dae"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.081973 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a45fc37-a955-4087-aa75-d04a64407dae-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3a45fc37-a955-4087-aa75-d04a64407dae" (UID: "3a45fc37-a955-4087-aa75-d04a64407dae"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.100995 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-scripts" (OuterVolumeSpecName: "scripts") pod "3a45fc37-a955-4087-aa75-d04a64407dae" (UID: "3a45fc37-a955-4087-aa75-d04a64407dae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.126199 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a871-account-create-update-8tsff"] Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.127065 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a45fc37-a955-4087-aa75-d04a64407dae-kube-api-access-dmfzn" (OuterVolumeSpecName: "kube-api-access-dmfzn") pod "3a45fc37-a955-4087-aa75-d04a64407dae" (UID: "3a45fc37-a955-4087-aa75-d04a64407dae"). InnerVolumeSpecName "kube-api-access-dmfzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:26:35 crc kubenswrapper[4772]: W1128 11:26:35.171377 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10e51d4a_d9ff_4393_bb1f_5ad90c13096f.slice/crio-10e0c795d251199629715b348e09c7d08493db0acfb6b3a50932d3c77af8ef88 WatchSource:0}: Error finding container 10e0c795d251199629715b348e09c7d08493db0acfb6b3a50932d3c77af8ef88: Status 404 returned error can't find the container with id 10e0c795d251199629715b348e09c7d08493db0acfb6b3a50932d3c77af8ef88 Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.182107 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a45fc37-a955-4087-aa75-d04a64407dae-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.182176 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a45fc37-a955-4087-aa75-d04a64407dae-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.182189 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.182199 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmfzn\" (UniqueName: \"kubernetes.io/projected/3a45fc37-a955-4087-aa75-d04a64407dae-kube-api-access-dmfzn\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.185481 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3a45fc37-a955-4087-aa75-d04a64407dae" (UID: "3a45fc37-a955-4087-aa75-d04a64407dae"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.190054 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mkk4l"] Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.256472 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a45fc37-a955-4087-aa75-d04a64407dae" (UID: "3a45fc37-a955-4087-aa75-d04a64407dae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.280168 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e98e-account-create-update-77lrr"] Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.284274 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.284319 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.288520 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-config-data" (OuterVolumeSpecName: "config-data") pod "3a45fc37-a955-4087-aa75-d04a64407dae" (UID: "3a45fc37-a955-4087-aa75-d04a64407dae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:26:35 crc kubenswrapper[4772]: W1128 11:26:35.312512 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7440d09_9e03_450e_bbda_f0680e63dac4.slice/crio-0737dc68afaf5aefd144e53c5a6fcb8d5ff559c202842bee553963794602043b WatchSource:0}: Error finding container 0737dc68afaf5aefd144e53c5a6fcb8d5ff559c202842bee553963794602043b: Status 404 returned error can't find the container with id 0737dc68afaf5aefd144e53c5a6fcb8d5ff559c202842bee553963794602043b Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.387981 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a45fc37-a955-4087-aa75-d04a64407dae-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.412915 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.413084 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.466230 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.847251 4772 generic.go:334] "Generic (PLEG): container finished" podID="fe1b7037-466b-4290-be1a-7ede41184bfc" containerID="5941f2beaf68fdb40dd4f34ec18562017a951a423c67454ff3ae25516da8a83a" exitCode=0 Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.847392 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2q5qg" 
event={"ID":"fe1b7037-466b-4290-be1a-7ede41184bfc","Type":"ContainerDied","Data":"5941f2beaf68fdb40dd4f34ec18562017a951a423c67454ff3ae25516da8a83a"} Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.847436 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2q5qg" event={"ID":"fe1b7037-466b-4290-be1a-7ede41184bfc","Type":"ContainerStarted","Data":"78d0c34d225f0c8abc81970b7259f3a2e1bb91f19ff368e25ac8d1d869d4d2ad"} Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.848968 4772 generic.go:334] "Generic (PLEG): container finished" podID="3068411c-0dc8-47e4-a58b-abf587764c20" containerID="8e4f54838d407ea8bc59725e4884b847b2c9262e16c00908382b8b9774706ce5" exitCode=0 Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.849061 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a871-account-create-update-8tsff" event={"ID":"3068411c-0dc8-47e4-a58b-abf587764c20","Type":"ContainerDied","Data":"8e4f54838d407ea8bc59725e4884b847b2c9262e16c00908382b8b9774706ce5"} Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.849100 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a871-account-create-update-8tsff" event={"ID":"3068411c-0dc8-47e4-a58b-abf587764c20","Type":"ContainerStarted","Data":"d3d1d0dfd6e065ec0bbe24ce744df071a0a57989d76df497a1be07001fce6975"} Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.850531 4772 generic.go:334] "Generic (PLEG): container finished" podID="11fb5581-331a-4b2b-9cae-ca7679b297b2" containerID="6b9e227bfa469621cfb27e17895b3b47ccd09a46d896d36764f3034cdfc68e71" exitCode=0 Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.850623 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vzlbg" event={"ID":"11fb5581-331a-4b2b-9cae-ca7679b297b2","Type":"ContainerDied","Data":"6b9e227bfa469621cfb27e17895b3b47ccd09a46d896d36764f3034cdfc68e71"} Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.852007 4772 generic.go:334] "Generic (PLEG): container finished" podID="10e51d4a-d9ff-4393-bb1f-5ad90c13096f" containerID="6bdbccb0995d232a89d0137697602459ed8d809c0ebc5068cabe773bb605996e" exitCode=0 Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.852058 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mkk4l" event={"ID":"10e51d4a-d9ff-4393-bb1f-5ad90c13096f","Type":"ContainerDied","Data":"6bdbccb0995d232a89d0137697602459ed8d809c0ebc5068cabe773bb605996e"} Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.852074 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mkk4l" event={"ID":"10e51d4a-d9ff-4393-bb1f-5ad90c13096f","Type":"ContainerStarted","Data":"10e0c795d251199629715b348e09c7d08493db0acfb6b3a50932d3c77af8ef88"} Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.855104 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a45fc37-a955-4087-aa75-d04a64407dae","Type":"ContainerDied","Data":"995c19f63228d325004470bafdc8c4ba6dd7244035c66a9ae235ca56b8cee13e"} Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.855176 4772 scope.go:117] "RemoveContainer" containerID="d8632e84c17b30152bb522c389516d6f695613ae14302cee84f16b2bf422b778" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.855353 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.862476 4772 generic.go:334] "Generic (PLEG): container finished" podID="27a2af00-c296-4e60-849c-e0157763aaa8" containerID="c4556fed18bd7278f59379064d9e83ad1c101780f41a8d09b83f20f286f4d7a8" exitCode=0 Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.862647 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-115c-account-create-update-l4s9j" event={"ID":"27a2af00-c296-4e60-849c-e0157763aaa8","Type":"ContainerDied","Data":"c4556fed18bd7278f59379064d9e83ad1c101780f41a8d09b83f20f286f4d7a8"} Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.862681 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-115c-account-create-update-l4s9j" event={"ID":"27a2af00-c296-4e60-849c-e0157763aaa8","Type":"ContainerStarted","Data":"a2b24507385263bad6ffd668bd304621ccaa13ec81b9e2210e49fba27e0e0334"} Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.871979 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e98e-account-create-update-77lrr" event={"ID":"b7440d09-9e03-450e-bbda-f0680e63dac4","Type":"ContainerStarted","Data":"2b59dc554a6b2661925f76e51a84ab82ad708a154ae29c90beb48f9e058787a0"} Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.872323 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e98e-account-create-update-77lrr" event={"ID":"b7440d09-9e03-450e-bbda-f0680e63dac4","Type":"ContainerStarted","Data":"0737dc68afaf5aefd144e53c5a6fcb8d5ff559c202842bee553963794602043b"} Nov 28 11:26:35 crc kubenswrapper[4772]: I1128 11:26:35.895559 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-e98e-account-create-update-77lrr" podStartSLOduration=1.8955358960000002 podStartE2EDuration="1.895535896s" podCreationTimestamp="2025-11-28 11:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:26:35.888162544 +0000 UTC m=+1194.211405771" watchObservedRunningTime="2025-11-28 11:26:35.895535896 +0000 UTC m=+1194.218779123" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.116754 4772 scope.go:117] "RemoveContainer" containerID="d6c90e9fa21b67d45d08fd162bc554079cdd101804ee7965544401092cdc3b08" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.123594 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.129828 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.149890 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:36 crc kubenswrapper[4772]: E1128 11:26:36.150329 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="proxy-httpd" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.150346 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="proxy-httpd" Nov 28 11:26:36 crc kubenswrapper[4772]: E1128 11:26:36.150376 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="sg-core" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.150383 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="sg-core" Nov 28 11:26:36 crc kubenswrapper[4772]: E1128 11:26:36.150403 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="ceilometer-central-agent" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.150410 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="ceilometer-central-agent" Nov 28 11:26:36 crc kubenswrapper[4772]: E1128 11:26:36.150427 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="ceilometer-notification-agent" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.150433 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="ceilometer-notification-agent" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.150606 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="sg-core" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.150628 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="ceilometer-central-agent" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.150641 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="proxy-httpd" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.150652 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" containerName="ceilometer-notification-agent" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.152519 4772 scope.go:117] "RemoveContainer" containerID="9dc891759d8541eb6994daeaf61943dea484ff7a4a8aebbb8c5ab31ccc8479c3" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.153189 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.157153 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.157481 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.202720 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.205530 4772 scope.go:117] "RemoveContainer" containerID="5cd8d04da882b8bd3bbd1b40dc78ec60c6eacb1008431e64e36aa3bf20927683" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.314220 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9svr\" (UniqueName: \"kubernetes.io/projected/ffab9862-3ac8-4a03-8ac1-3935bbddc294-kube-api-access-p9svr\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.314318 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-scripts\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.314611 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.314740 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-config-data\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.314976 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffab9862-3ac8-4a03-8ac1-3935bbddc294-run-httpd\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.315106 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffab9862-3ac8-4a03-8ac1-3935bbddc294-log-httpd\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.315182 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.417989 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9svr\" (UniqueName: 
\"kubernetes.io/projected/ffab9862-3ac8-4a03-8ac1-3935bbddc294-kube-api-access-p9svr\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.418040 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-scripts\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.418096 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.418118 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-config-data\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.418150 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffab9862-3ac8-4a03-8ac1-3935bbddc294-run-httpd\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.418173 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffab9862-3ac8-4a03-8ac1-3935bbddc294-log-httpd\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.418198 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.419355 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffab9862-3ac8-4a03-8ac1-3935bbddc294-run-httpd\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.419419 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffab9862-3ac8-4a03-8ac1-3935bbddc294-log-httpd\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.426168 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.428184 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.431380 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-config-data\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.437118 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9svr\" (UniqueName: \"kubernetes.io/projected/ffab9862-3ac8-4a03-8ac1-3935bbddc294-kube-api-access-p9svr\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.440449 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-scripts\") pod \"ceilometer-0\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.493928 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.889819 4772 generic.go:334] "Generic (PLEG): container finished" podID="b7440d09-9e03-450e-bbda-f0680e63dac4" containerID="2b59dc554a6b2661925f76e51a84ab82ad708a154ae29c90beb48f9e058787a0" exitCode=0 Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.890059 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e98e-account-create-update-77lrr" event={"ID":"b7440d09-9e03-450e-bbda-f0680e63dac4","Type":"ContainerDied","Data":"2b59dc554a6b2661925f76e51a84ab82ad708a154ae29c90beb48f9e058787a0"} Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.966159 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.966323 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 11:26:36 crc kubenswrapper[4772]: I1128 11:26:36.985065 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.056558 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:26:37 crc kubenswrapper[4772]: W1128 11:26:37.057939 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffab9862_3ac8_4a03_8ac1_3935bbddc294.slice/crio-6753fa75f4c1d34c7198d3c135de78d8bc26e53b6874f2bcd857f08839e31469 WatchSource:0}: Error finding container 6753fa75f4c1d34c7198d3c135de78d8bc26e53b6874f2bcd857f08839e31469: Status 404 returned error can't find the container with id 6753fa75f4c1d34c7198d3c135de78d8bc26e53b6874f2bcd857f08839e31469 Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.345259 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a871-account-create-update-8tsff" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.448609 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwjf7\" (UniqueName: \"kubernetes.io/projected/3068411c-0dc8-47e4-a58b-abf587764c20-kube-api-access-gwjf7\") pod \"3068411c-0dc8-47e4-a58b-abf587764c20\" (UID: \"3068411c-0dc8-47e4-a58b-abf587764c20\") " Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.448664 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3068411c-0dc8-47e4-a58b-abf587764c20-operator-scripts\") pod \"3068411c-0dc8-47e4-a58b-abf587764c20\" (UID: \"3068411c-0dc8-47e4-a58b-abf587764c20\") " Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.449377 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3068411c-0dc8-47e4-a58b-abf587764c20-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3068411c-0dc8-47e4-a58b-abf587764c20" (UID: "3068411c-0dc8-47e4-a58b-abf587764c20"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.450043 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3068411c-0dc8-47e4-a58b-abf587764c20-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.456649 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3068411c-0dc8-47e4-a58b-abf587764c20-kube-api-access-gwjf7" (OuterVolumeSpecName: "kube-api-access-gwjf7") pod "3068411c-0dc8-47e4-a58b-abf587764c20" (UID: "3068411c-0dc8-47e4-a58b-abf587764c20"). InnerVolumeSpecName "kube-api-access-gwjf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.518463 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-115c-account-create-update-l4s9j" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.528421 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mkk4l" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.539219 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2q5qg" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.552845 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwjf7\" (UniqueName: \"kubernetes.io/projected/3068411c-0dc8-47e4-a58b-abf587764c20-kube-api-access-gwjf7\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.560642 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vzlbg" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.654185 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe1b7037-466b-4290-be1a-7ede41184bfc-operator-scripts\") pod \"fe1b7037-466b-4290-be1a-7ede41184bfc\" (UID: \"fe1b7037-466b-4290-be1a-7ede41184bfc\") " Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.654274 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11fb5581-331a-4b2b-9cae-ca7679b297b2-operator-scripts\") pod \"11fb5581-331a-4b2b-9cae-ca7679b297b2\" (UID: \"11fb5581-331a-4b2b-9cae-ca7679b297b2\") " Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.654326 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s2qb\" (UniqueName: \"kubernetes.io/projected/fe1b7037-466b-4290-be1a-7ede41184bfc-kube-api-access-7s2qb\") pod \"fe1b7037-466b-4290-be1a-7ede41184bfc\" (UID: \"fe1b7037-466b-4290-be1a-7ede41184bfc\") " Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.654423 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h57t\" (UniqueName: \"kubernetes.io/projected/11fb5581-331a-4b2b-9cae-ca7679b297b2-kube-api-access-4h57t\") pod \"11fb5581-331a-4b2b-9cae-ca7679b297b2\" (UID: \"11fb5581-331a-4b2b-9cae-ca7679b297b2\") " Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.654536 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10e51d4a-d9ff-4393-bb1f-5ad90c13096f-operator-scripts\") pod \"10e51d4a-d9ff-4393-bb1f-5ad90c13096f\" (UID: \"10e51d4a-d9ff-4393-bb1f-5ad90c13096f\") " Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.654567 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27a2af00-c296-4e60-849c-e0157763aaa8-operator-scripts\") pod \"27a2af00-c296-4e60-849c-e0157763aaa8\" (UID: \"27a2af00-c296-4e60-849c-e0157763aaa8\") " Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.654738 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vltx8\" (UniqueName: \"kubernetes.io/projected/27a2af00-c296-4e60-849c-e0157763aaa8-kube-api-access-vltx8\") pod \"27a2af00-c296-4e60-849c-e0157763aaa8\" (UID: \"27a2af00-c296-4e60-849c-e0157763aaa8\") " Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.654766 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lznw2\" (UniqueName: \"kubernetes.io/projected/10e51d4a-d9ff-4393-bb1f-5ad90c13096f-kube-api-access-lznw2\") pod \"10e51d4a-d9ff-4393-bb1f-5ad90c13096f\" (UID: \"10e51d4a-d9ff-4393-bb1f-5ad90c13096f\") " Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.656730 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe1b7037-466b-4290-be1a-7ede41184bfc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe1b7037-466b-4290-be1a-7ede41184bfc" (UID: "fe1b7037-466b-4290-be1a-7ede41184bfc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.657078 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11fb5581-331a-4b2b-9cae-ca7679b297b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11fb5581-331a-4b2b-9cae-ca7679b297b2" (UID: "11fb5581-331a-4b2b-9cae-ca7679b297b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.657772 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a2af00-c296-4e60-849c-e0157763aaa8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27a2af00-c296-4e60-849c-e0157763aaa8" (UID: "27a2af00-c296-4e60-849c-e0157763aaa8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.658175 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10e51d4a-d9ff-4393-bb1f-5ad90c13096f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10e51d4a-d9ff-4393-bb1f-5ad90c13096f" (UID: "10e51d4a-d9ff-4393-bb1f-5ad90c13096f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.659381 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10e51d4a-d9ff-4393-bb1f-5ad90c13096f-kube-api-access-lznw2" (OuterVolumeSpecName: "kube-api-access-lznw2") pod "10e51d4a-d9ff-4393-bb1f-5ad90c13096f" (UID: "10e51d4a-d9ff-4393-bb1f-5ad90c13096f"). InnerVolumeSpecName "kube-api-access-lznw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.662729 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe1b7037-466b-4290-be1a-7ede41184bfc-kube-api-access-7s2qb" (OuterVolumeSpecName: "kube-api-access-7s2qb") pod "fe1b7037-466b-4290-be1a-7ede41184bfc" (UID: "fe1b7037-466b-4290-be1a-7ede41184bfc"). InnerVolumeSpecName "kube-api-access-7s2qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.664663 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a2af00-c296-4e60-849c-e0157763aaa8-kube-api-access-vltx8" (OuterVolumeSpecName: "kube-api-access-vltx8") pod "27a2af00-c296-4e60-849c-e0157763aaa8" (UID: "27a2af00-c296-4e60-849c-e0157763aaa8"). InnerVolumeSpecName "kube-api-access-vltx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.669696 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11fb5581-331a-4b2b-9cae-ca7679b297b2-kube-api-access-4h57t" (OuterVolumeSpecName: "kube-api-access-4h57t") pod "11fb5581-331a-4b2b-9cae-ca7679b297b2" (UID: "11fb5581-331a-4b2b-9cae-ca7679b297b2"). InnerVolumeSpecName "kube-api-access-4h57t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.758633 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe1b7037-466b-4290-be1a-7ede41184bfc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.758936 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11fb5581-331a-4b2b-9cae-ca7679b297b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.758947 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s2qb\" (UniqueName: \"kubernetes.io/projected/fe1b7037-466b-4290-be1a-7ede41184bfc-kube-api-access-7s2qb\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.758960 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h57t\" (UniqueName: \"kubernetes.io/projected/11fb5581-331a-4b2b-9cae-ca7679b297b2-kube-api-access-4h57t\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.758970 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10e51d4a-d9ff-4393-bb1f-5ad90c13096f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.758980 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27a2af00-c296-4e60-849c-e0157763aaa8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.758990 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vltx8\" (UniqueName: \"kubernetes.io/projected/27a2af00-c296-4e60-849c-e0157763aaa8-kube-api-access-vltx8\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.758998 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lznw2\" (UniqueName: \"kubernetes.io/projected/10e51d4a-d9ff-4393-bb1f-5ad90c13096f-kube-api-access-lznw2\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.905809 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2q5qg" event={"ID":"fe1b7037-466b-4290-be1a-7ede41184bfc","Type":"ContainerDied","Data":"78d0c34d225f0c8abc81970b7259f3a2e1bb91f19ff368e25ac8d1d869d4d2ad"} Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.905868 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78d0c34d225f0c8abc81970b7259f3a2e1bb91f19ff368e25ac8d1d869d4d2ad" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.905942 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2q5qg" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.919444 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a871-account-create-update-8tsff" event={"ID":"3068411c-0dc8-47e4-a58b-abf587764c20","Type":"ContainerDied","Data":"d3d1d0dfd6e065ec0bbe24ce744df071a0a57989d76df497a1be07001fce6975"} Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.919488 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a871-account-create-update-8tsff" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.919493 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3d1d0dfd6e065ec0bbe24ce744df071a0a57989d76df497a1be07001fce6975" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.922178 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vzlbg" event={"ID":"11fb5581-331a-4b2b-9cae-ca7679b297b2","Type":"ContainerDied","Data":"f625f21b482d5da8de91d48c791cb70d0e598f1a0acb8d21f123465d38603982"} Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.922276 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f625f21b482d5da8de91d48c791cb70d0e598f1a0acb8d21f123465d38603982" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.922458 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vzlbg" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.940678 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mkk4l" event={"ID":"10e51d4a-d9ff-4393-bb1f-5ad90c13096f","Type":"ContainerDied","Data":"10e0c795d251199629715b348e09c7d08493db0acfb6b3a50932d3c77af8ef88"} Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.940728 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10e0c795d251199629715b348e09c7d08493db0acfb6b3a50932d3c77af8ef88" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.940846 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mkk4l" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.943270 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-115c-account-create-update-l4s9j" event={"ID":"27a2af00-c296-4e60-849c-e0157763aaa8","Type":"ContainerDied","Data":"a2b24507385263bad6ffd668bd304621ccaa13ec81b9e2210e49fba27e0e0334"} Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.943296 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2b24507385263bad6ffd668bd304621ccaa13ec81b9e2210e49fba27e0e0334" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.943373 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-115c-account-create-update-l4s9j" Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.947888 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffab9862-3ac8-4a03-8ac1-3935bbddc294","Type":"ContainerStarted","Data":"7843e8897d7706d954490602adeacc5680d781704959fb09b9c9ad812e913265"} Nov 28 11:26:37 crc kubenswrapper[4772]: I1128 11:26:37.947949 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffab9862-3ac8-4a03-8ac1-3935bbddc294","Type":"ContainerStarted","Data":"6753fa75f4c1d34c7198d3c135de78d8bc26e53b6874f2bcd857f08839e31469"} Nov 28 11:26:38 crc kubenswrapper[4772]: I1128 11:26:38.015203 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a45fc37-a955-4087-aa75-d04a64407dae" path="/var/lib/kubelet/pods/3a45fc37-a955-4087-aa75-d04a64407dae/volumes" Nov 28 11:26:38 crc kubenswrapper[4772]: I1128 11:26:38.359351 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e98e-account-create-update-77lrr" Nov 28 11:26:38 crc kubenswrapper[4772]: I1128 11:26:38.488905 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svq8d\" (UniqueName: \"kubernetes.io/projected/b7440d09-9e03-450e-bbda-f0680e63dac4-kube-api-access-svq8d\") pod \"b7440d09-9e03-450e-bbda-f0680e63dac4\" (UID: \"b7440d09-9e03-450e-bbda-f0680e63dac4\") " Nov 28 11:26:38 crc kubenswrapper[4772]: I1128 11:26:38.489069 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7440d09-9e03-450e-bbda-f0680e63dac4-operator-scripts\") pod \"b7440d09-9e03-450e-bbda-f0680e63dac4\" (UID: \"b7440d09-9e03-450e-bbda-f0680e63dac4\") " Nov 28 11:26:38 crc kubenswrapper[4772]: I1128 11:26:38.490323 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7440d09-9e03-450e-bbda-f0680e63dac4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7440d09-9e03-450e-bbda-f0680e63dac4" (UID: "b7440d09-9e03-450e-bbda-f0680e63dac4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:26:38 crc kubenswrapper[4772]: I1128 11:26:38.494344 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7440d09-9e03-450e-bbda-f0680e63dac4-kube-api-access-svq8d" (OuterVolumeSpecName: "kube-api-access-svq8d") pod "b7440d09-9e03-450e-bbda-f0680e63dac4" (UID: "b7440d09-9e03-450e-bbda-f0680e63dac4"). InnerVolumeSpecName "kube-api-access-svq8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:26:38 crc kubenswrapper[4772]: I1128 11:26:38.591540 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7440d09-9e03-450e-bbda-f0680e63dac4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:38 crc kubenswrapper[4772]: I1128 11:26:38.591577 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svq8d\" (UniqueName: \"kubernetes.io/projected/b7440d09-9e03-450e-bbda-f0680e63dac4-kube-api-access-svq8d\") on node \"crc\" DevicePath \"\"" Nov 28 11:26:38 crc kubenswrapper[4772]: I1128 11:26:38.962220 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e98e-account-create-update-77lrr" Nov 28 11:26:38 crc kubenswrapper[4772]: I1128 11:26:38.962134 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e98e-account-create-update-77lrr" event={"ID":"b7440d09-9e03-450e-bbda-f0680e63dac4","Type":"ContainerDied","Data":"0737dc68afaf5aefd144e53c5a6fcb8d5ff559c202842bee553963794602043b"} Nov 28 11:26:38 crc kubenswrapper[4772]: I1128 11:26:38.962559 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0737dc68afaf5aefd144e53c5a6fcb8d5ff559c202842bee553963794602043b" Nov 28 11:26:38 crc kubenswrapper[4772]: I1128 11:26:38.964512 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffab9862-3ac8-4a03-8ac1-3935bbddc294","Type":"ContainerStarted","Data":"3adeeb58621dd9bac18a1615ba88a750255be7fc3e5e9f6ff09825ece35867de"} Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.334931 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8jh8h"] Nov 28 11:26:39 crc kubenswrapper[4772]: E1128 11:26:39.335976 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a2af00-c296-4e60-849c-e0157763aaa8" containerName="mariadb-account-create-update" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.336001 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a2af00-c296-4e60-849c-e0157763aaa8" containerName="mariadb-account-create-update" Nov 28 11:26:39 crc kubenswrapper[4772]: E1128 11:26:39.336045 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3068411c-0dc8-47e4-a58b-abf587764c20" containerName="mariadb-account-create-update" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.336053 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3068411c-0dc8-47e4-a58b-abf587764c20" containerName="mariadb-account-create-update" Nov 28 11:26:39 crc kubenswrapper[4772]: E1128 11:26:39.336073 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e51d4a-d9ff-4393-bb1f-5ad90c13096f" containerName="mariadb-database-create" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.336085 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e51d4a-d9ff-4393-bb1f-5ad90c13096f" containerName="mariadb-database-create" Nov 28 11:26:39 crc kubenswrapper[4772]: E1128 11:26:39.336097 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11fb5581-331a-4b2b-9cae-ca7679b297b2" containerName="mariadb-database-create" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.336103 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="11fb5581-331a-4b2b-9cae-ca7679b297b2" containerName="mariadb-database-create" Nov 28 11:26:39 crc kubenswrapper[4772]: E1128 11:26:39.336112 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1b7037-466b-4290-be1a-7ede41184bfc" containerName="mariadb-database-create" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.336118 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1b7037-466b-4290-be1a-7ede41184bfc" containerName="mariadb-database-create" Nov 28 11:26:39 crc kubenswrapper[4772]: E1128 11:26:39.336126 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7440d09-9e03-450e-bbda-f0680e63dac4" containerName="mariadb-account-create-update" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.336133 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b7440d09-9e03-450e-bbda-f0680e63dac4" containerName="mariadb-account-create-update" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.336323 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a2af00-c296-4e60-849c-e0157763aaa8" containerName="mariadb-account-create-update" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.336339 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3068411c-0dc8-47e4-a58b-abf587764c20" containerName="mariadb-account-create-update" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.336346 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e51d4a-d9ff-4393-bb1f-5ad90c13096f" containerName="mariadb-database-create" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.336385 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7440d09-9e03-450e-bbda-f0680e63dac4" containerName="mariadb-account-create-update" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.336397 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="11fb5581-331a-4b2b-9cae-ca7679b297b2" containerName="mariadb-database-create" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.336406 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1b7037-466b-4290-be1a-7ede41184bfc" containerName="mariadb-database-create" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.337321 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8jh8h" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.341522 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mbnj7" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.341668 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.343015 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.358144 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8jh8h"] Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.407731 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8jh8h\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") " pod="openstack/nova-cell0-conductor-db-sync-8jh8h" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.407850 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-scripts\") pod \"nova-cell0-conductor-db-sync-8jh8h\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") " pod="openstack/nova-cell0-conductor-db-sync-8jh8h" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.408162 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx5j9\" (UniqueName: \"kubernetes.io/projected/8efe8b01-e196-469d-b817-4864b4de95d4-kube-api-access-hx5j9\") pod \"nova-cell0-conductor-db-sync-8jh8h\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") " pod="openstack/nova-cell0-conductor-db-sync-8jh8h" Nov 28 11:26:39 crc 
kubenswrapper[4772]: I1128 11:26:39.408332 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-config-data\") pod \"nova-cell0-conductor-db-sync-8jh8h\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") " pod="openstack/nova-cell0-conductor-db-sync-8jh8h" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.510128 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8jh8h\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") " pod="openstack/nova-cell0-conductor-db-sync-8jh8h" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.510211 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-scripts\") pod \"nova-cell0-conductor-db-sync-8jh8h\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") " pod="openstack/nova-cell0-conductor-db-sync-8jh8h" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.510284 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx5j9\" (UniqueName: \"kubernetes.io/projected/8efe8b01-e196-469d-b817-4864b4de95d4-kube-api-access-hx5j9\") pod \"nova-cell0-conductor-db-sync-8jh8h\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") " pod="openstack/nova-cell0-conductor-db-sync-8jh8h" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.510334 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-config-data\") pod \"nova-cell0-conductor-db-sync-8jh8h\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") " pod="openstack/nova-cell0-conductor-db-sync-8jh8h" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.515883 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-scripts\") pod \"nova-cell0-conductor-db-sync-8jh8h\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") " pod="openstack/nova-cell0-conductor-db-sync-8jh8h" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.516550 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8jh8h\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") " pod="openstack/nova-cell0-conductor-db-sync-8jh8h" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.518581 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-config-data\") pod \"nova-cell0-conductor-db-sync-8jh8h\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") " pod="openstack/nova-cell0-conductor-db-sync-8jh8h" Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.532309 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx5j9\" (UniqueName: \"kubernetes.io/projected/8efe8b01-e196-469d-b817-4864b4de95d4-kube-api-access-hx5j9\") pod \"nova-cell0-conductor-db-sync-8jh8h\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") " pod="openstack/nova-cell0-conductor-db-sync-8jh8h" 
Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.660757 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8jh8h"
Nov 28 11:26:39 crc kubenswrapper[4772]: I1128 11:26:39.984231 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffab9862-3ac8-4a03-8ac1-3935bbddc294","Type":"ContainerStarted","Data":"850fbb2ad540ad6417d50f57a1e59d5994dfbce102d70c9af36fb4c687cb15ac"}
Nov 28 11:26:40 crc kubenswrapper[4772]: I1128 11:26:40.638118 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8jh8h"]
Nov 28 11:26:40 crc kubenswrapper[4772]: I1128 11:26:40.995461 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8jh8h" event={"ID":"8efe8b01-e196-469d-b817-4864b4de95d4","Type":"ContainerStarted","Data":"4af923253eee3749cb2b7361f1567f3ce1ad77d1a72a8c76a08a855c45468bcb"}
Nov 28 11:26:42 crc kubenswrapper[4772]: I1128 11:26:42.014468 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffab9862-3ac8-4a03-8ac1-3935bbddc294","Type":"ContainerStarted","Data":"007595cec97a6a92ec7a7d529af49f3edceaa4c0b5a8d64eddb3e7b374edd68d"}
Nov 28 11:26:42 crc kubenswrapper[4772]: I1128 11:26:42.015111 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 28 11:26:42 crc kubenswrapper[4772]: I1128 11:26:42.092183 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.915299082 podStartE2EDuration="6.092152517s" podCreationTimestamp="2025-11-28 11:26:36 +0000 UTC" firstStartedPulling="2025-11-28 11:26:37.064217251 +0000 UTC m=+1195.387460478" lastFinishedPulling="2025-11-28 11:26:41.241070686 +0000 UTC m=+1199.564313913" observedRunningTime="2025-11-28 11:26:42.082047974 +0000 UTC m=+1200.405291201" watchObservedRunningTime="2025-11-28 11:26:42.092152517 +0000 UTC m=+1200.415395744"
Nov 28 11:26:51 crc kubenswrapper[4772]: I1128 11:26:51.180754 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8jh8h" event={"ID":"8efe8b01-e196-469d-b817-4864b4de95d4","Type":"ContainerStarted","Data":"9fc79052759f344a4d0f69e253d2282dd38880bee86bf1f1f7a06e96d3e72b59"}
Nov 28 11:26:51 crc kubenswrapper[4772]: I1128 11:26:51.218619 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-8jh8h" podStartSLOduration=2.47886739 podStartE2EDuration="12.218584507s" podCreationTimestamp="2025-11-28 11:26:39 +0000 UTC" firstStartedPulling="2025-11-28 11:26:40.666026266 +0000 UTC m=+1198.989269493" lastFinishedPulling="2025-11-28 11:26:50.405743383 +0000 UTC m=+1208.728986610" observedRunningTime="2025-11-28 11:26:51.214045149 +0000 UTC m=+1209.537288406" watchObservedRunningTime="2025-11-28 11:26:51.218584507 +0000 UTC m=+1209.541827774"
Nov 28 11:26:53 crc kubenswrapper[4772]: I1128 11:26:53.896616 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 11:26:53 crc kubenswrapper[4772]: I1128 11:26:53.897403 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 11:27:02 crc kubenswrapper[4772]: I1128 11:27:02.347049 4772 generic.go:334] "Generic (PLEG): container finished" podID="8efe8b01-e196-469d-b817-4864b4de95d4" containerID="9fc79052759f344a4d0f69e253d2282dd38880bee86bf1f1f7a06e96d3e72b59" exitCode=0
Nov 28 11:27:02 crc kubenswrapper[4772]: I1128 11:27:02.347155 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8jh8h" event={"ID":"8efe8b01-e196-469d-b817-4864b4de95d4","Type":"ContainerDied","Data":"9fc79052759f344a4d0f69e253d2282dd38880bee86bf1f1f7a06e96d3e72b59"}
Nov 28 11:27:03 crc kubenswrapper[4772]: I1128 11:27:03.762564 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8jh8h"
Nov 28 11:27:03 crc kubenswrapper[4772]: I1128 11:27:03.920394 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx5j9\" (UniqueName: \"kubernetes.io/projected/8efe8b01-e196-469d-b817-4864b4de95d4-kube-api-access-hx5j9\") pod \"8efe8b01-e196-469d-b817-4864b4de95d4\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") "
Nov 28 11:27:03 crc kubenswrapper[4772]: I1128 11:27:03.920577 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-scripts\") pod \"8efe8b01-e196-469d-b817-4864b4de95d4\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") "
Nov 28 11:27:03 crc kubenswrapper[4772]: I1128 11:27:03.920612 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-config-data\") pod \"8efe8b01-e196-469d-b817-4864b4de95d4\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") "
Nov 28 11:27:03 crc kubenswrapper[4772]: I1128 11:27:03.920699 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-combined-ca-bundle\") pod \"8efe8b01-e196-469d-b817-4864b4de95d4\" (UID: \"8efe8b01-e196-469d-b817-4864b4de95d4\") "
Nov 28 11:27:03 crc kubenswrapper[4772]: I1128 11:27:03.928394 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-scripts" (OuterVolumeSpecName: "scripts") pod "8efe8b01-e196-469d-b817-4864b4de95d4" (UID: "8efe8b01-e196-469d-b817-4864b4de95d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:27:03 crc kubenswrapper[4772]: I1128 11:27:03.933591 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8efe8b01-e196-469d-b817-4864b4de95d4-kube-api-access-hx5j9" (OuterVolumeSpecName: "kube-api-access-hx5j9") pod "8efe8b01-e196-469d-b817-4864b4de95d4" (UID: "8efe8b01-e196-469d-b817-4864b4de95d4"). InnerVolumeSpecName "kube-api-access-hx5j9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:27:03 crc kubenswrapper[4772]: I1128 11:27:03.955588 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-config-data" (OuterVolumeSpecName: "config-data") pod "8efe8b01-e196-469d-b817-4864b4de95d4" (UID: "8efe8b01-e196-469d-b817-4864b4de95d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:27:03 crc kubenswrapper[4772]: I1128 11:27:03.960206 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8efe8b01-e196-469d-b817-4864b4de95d4" (UID: "8efe8b01-e196-469d-b817-4864b4de95d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.023862 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx5j9\" (UniqueName: \"kubernetes.io/projected/8efe8b01-e196-469d-b817-4864b4de95d4-kube-api-access-hx5j9\") on node \"crc\" DevicePath \"\""
Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.024174 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.024247 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.024325 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efe8b01-e196-469d-b817-4864b4de95d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.375219 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8jh8h" event={"ID":"8efe8b01-e196-469d-b817-4864b4de95d4","Type":"ContainerDied","Data":"4af923253eee3749cb2b7361f1567f3ce1ad77d1a72a8c76a08a855c45468bcb"}
Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.376195 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4af923253eee3749cb2b7361f1567f3ce1ad77d1a72a8c76a08a855c45468bcb"
Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.375308 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8jh8h"
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8jh8h" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.622669 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 11:27:04 crc kubenswrapper[4772]: E1128 11:27:04.623417 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efe8b01-e196-469d-b817-4864b4de95d4" containerName="nova-cell0-conductor-db-sync" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.623453 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8efe8b01-e196-469d-b817-4864b4de95d4" containerName="nova-cell0-conductor-db-sync" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.623906 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8efe8b01-e196-469d-b817-4864b4de95d4" containerName="nova-cell0-conductor-db-sync" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.625225 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.629583 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mbnj7" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.630828 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.642778 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.643058 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqc99\" (UniqueName: \"kubernetes.io/projected/367a81eb-3924-42ce-8fcf-258e2ad0b494-kube-api-access-tqc99\") pod \"nova-cell0-conductor-0\" (UID: \"367a81eb-3924-42ce-8fcf-258e2ad0b494\") " pod="openstack/nova-cell0-conductor-0" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.643228 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/367a81eb-3924-42ce-8fcf-258e2ad0b494-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"367a81eb-3924-42ce-8fcf-258e2ad0b494\") " pod="openstack/nova-cell0-conductor-0" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.643310 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367a81eb-3924-42ce-8fcf-258e2ad0b494-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"367a81eb-3924-42ce-8fcf-258e2ad0b494\") " pod="openstack/nova-cell0-conductor-0" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.745461 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367a81eb-3924-42ce-8fcf-258e2ad0b494-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"367a81eb-3924-42ce-8fcf-258e2ad0b494\") " pod="openstack/nova-cell0-conductor-0" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.745609 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqc99\" (UniqueName: \"kubernetes.io/projected/367a81eb-3924-42ce-8fcf-258e2ad0b494-kube-api-access-tqc99\") pod \"nova-cell0-conductor-0\" (UID: \"367a81eb-3924-42ce-8fcf-258e2ad0b494\") " pod="openstack/nova-cell0-conductor-0" Nov 28 11:27:04 crc 
kubenswrapper[4772]: I1128 11:27:04.745739 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/367a81eb-3924-42ce-8fcf-258e2ad0b494-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"367a81eb-3924-42ce-8fcf-258e2ad0b494\") " pod="openstack/nova-cell0-conductor-0" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.753207 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367a81eb-3924-42ce-8fcf-258e2ad0b494-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"367a81eb-3924-42ce-8fcf-258e2ad0b494\") " pod="openstack/nova-cell0-conductor-0" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.759392 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/367a81eb-3924-42ce-8fcf-258e2ad0b494-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"367a81eb-3924-42ce-8fcf-258e2ad0b494\") " pod="openstack/nova-cell0-conductor-0" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.764220 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqc99\" (UniqueName: \"kubernetes.io/projected/367a81eb-3924-42ce-8fcf-258e2ad0b494-kube-api-access-tqc99\") pod \"nova-cell0-conductor-0\" (UID: \"367a81eb-3924-42ce-8fcf-258e2ad0b494\") " pod="openstack/nova-cell0-conductor-0" Nov 28 11:27:04 crc kubenswrapper[4772]: I1128 11:27:04.947894 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 11:27:05 crc kubenswrapper[4772]: I1128 11:27:05.473563 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 11:27:06 crc kubenswrapper[4772]: I1128 11:27:06.406499 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"367a81eb-3924-42ce-8fcf-258e2ad0b494","Type":"ContainerStarted","Data":"e8c85c6cdbcd28d3d6d0cba4186437071de9d58909f578a5fa3fa401c87df080"} Nov 28 11:27:06 crc kubenswrapper[4772]: I1128 11:27:06.407183 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 28 11:27:06 crc kubenswrapper[4772]: I1128 11:27:06.407205 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"367a81eb-3924-42ce-8fcf-258e2ad0b494","Type":"ContainerStarted","Data":"35bd94a9efd4deb7ac7328d66f7c68e654c94e5239977cc34c6e51371d84d0d9"} Nov 28 11:27:06 crc kubenswrapper[4772]: I1128 11:27:06.502494 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 11:27:06 crc kubenswrapper[4772]: I1128 11:27:06.544700 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.544653937 podStartE2EDuration="2.544653937s" podCreationTimestamp="2025-11-28 11:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:27:06.437960608 +0000 UTC m=+1224.761203845" watchObservedRunningTime="2025-11-28 11:27:06.544653937 +0000 UTC m=+1224.867897164" Nov 28 11:27:10 crc kubenswrapper[4772]: I1128 11:27:10.220834 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 11:27:10 crc kubenswrapper[4772]: I1128 11:27:10.221455 4772 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="064be676-f5a5-4ae0-9fce-c2103f169de8" containerName="kube-state-metrics" containerID="cri-o://e612c1a473ab5febd45a9d07b045fa1d6b2307e23acc0effe8dcfe569bc1a8b0" gracePeriod=30 Nov 28 11:27:10 crc kubenswrapper[4772]: I1128 11:27:10.474899 4772 generic.go:334] "Generic (PLEG): container finished" podID="064be676-f5a5-4ae0-9fce-c2103f169de8" containerID="e612c1a473ab5febd45a9d07b045fa1d6b2307e23acc0effe8dcfe569bc1a8b0" exitCode=2 Nov 28 11:27:10 crc kubenswrapper[4772]: I1128 11:27:10.474953 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"064be676-f5a5-4ae0-9fce-c2103f169de8","Type":"ContainerDied","Data":"e612c1a473ab5febd45a9d07b045fa1d6b2307e23acc0effe8dcfe569bc1a8b0"} Nov 28 11:27:10 crc kubenswrapper[4772]: I1128 11:27:10.701652 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 11:27:10 crc kubenswrapper[4772]: I1128 11:27:10.895851 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65qzq\" (UniqueName: \"kubernetes.io/projected/064be676-f5a5-4ae0-9fce-c2103f169de8-kube-api-access-65qzq\") pod \"064be676-f5a5-4ae0-9fce-c2103f169de8\" (UID: \"064be676-f5a5-4ae0-9fce-c2103f169de8\") " Nov 28 11:27:10 crc kubenswrapper[4772]: I1128 11:27:10.903894 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064be676-f5a5-4ae0-9fce-c2103f169de8-kube-api-access-65qzq" (OuterVolumeSpecName: "kube-api-access-65qzq") pod "064be676-f5a5-4ae0-9fce-c2103f169de8" (UID: "064be676-f5a5-4ae0-9fce-c2103f169de8"). InnerVolumeSpecName "kube-api-access-65qzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:27:10 crc kubenswrapper[4772]: I1128 11:27:10.997864 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65qzq\" (UniqueName: \"kubernetes.io/projected/064be676-f5a5-4ae0-9fce-c2103f169de8-kube-api-access-65qzq\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.488656 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"064be676-f5a5-4ae0-9fce-c2103f169de8","Type":"ContainerDied","Data":"3fd9df72666bc0899509b8417fe31e14b1cc5a97d185f27d897b074dabbcf683"} Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.489080 4772 scope.go:117] "RemoveContainer" containerID="e612c1a473ab5febd45a9d07b045fa1d6b2307e23acc0effe8dcfe569bc1a8b0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.489264 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.539776 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.555571 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.572629 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 11:27:11 crc kubenswrapper[4772]: E1128 11:27:11.573312 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064be676-f5a5-4ae0-9fce-c2103f169de8" containerName="kube-state-metrics" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.573342 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="064be676-f5a5-4ae0-9fce-c2103f169de8" containerName="kube-state-metrics" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.573685 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="064be676-f5a5-4ae0-9fce-c2103f169de8" containerName="kube-state-metrics" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.577129 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.582157 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.584818 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.588815 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.725793 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bfc712b-ffa2-4fc0-825c-def988a3f1b2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3bfc712b-ffa2-4fc0-825c-def988a3f1b2\") " pod="openstack/kube-state-metrics-0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.725961 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8sb9\" (UniqueName: \"kubernetes.io/projected/3bfc712b-ffa2-4fc0-825c-def988a3f1b2-kube-api-access-l8sb9\") pod \"kube-state-metrics-0\" (UID: \"3bfc712b-ffa2-4fc0-825c-def988a3f1b2\") " pod="openstack/kube-state-metrics-0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.726074 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bfc712b-ffa2-4fc0-825c-def988a3f1b2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3bfc712b-ffa2-4fc0-825c-def988a3f1b2\") " pod="openstack/kube-state-metrics-0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.726222 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3bfc712b-ffa2-4fc0-825c-def988a3f1b2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3bfc712b-ffa2-4fc0-825c-def988a3f1b2\") " pod="openstack/kube-state-metrics-0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.828352 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bfc712b-ffa2-4fc0-825c-def988a3f1b2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3bfc712b-ffa2-4fc0-825c-def988a3f1b2\") " pod="openstack/kube-state-metrics-0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.828797 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3bfc712b-ffa2-4fc0-825c-def988a3f1b2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3bfc712b-ffa2-4fc0-825c-def988a3f1b2\") " pod="openstack/kube-state-metrics-0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.829840 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bfc712b-ffa2-4fc0-825c-def988a3f1b2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3bfc712b-ffa2-4fc0-825c-def988a3f1b2\") " pod="openstack/kube-state-metrics-0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.830020 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8sb9\" (UniqueName: \"kubernetes.io/projected/3bfc712b-ffa2-4fc0-825c-def988a3f1b2-kube-api-access-l8sb9\") pod \"kube-state-metrics-0\" (UID: \"3bfc712b-ffa2-4fc0-825c-def988a3f1b2\") " pod="openstack/kube-state-metrics-0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.838422 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bfc712b-ffa2-4fc0-825c-def988a3f1b2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3bfc712b-ffa2-4fc0-825c-def988a3f1b2\") " pod="openstack/kube-state-metrics-0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.837883 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bfc712b-ffa2-4fc0-825c-def988a3f1b2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3bfc712b-ffa2-4fc0-825c-def988a3f1b2\") " pod="openstack/kube-state-metrics-0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.847081 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3bfc712b-ffa2-4fc0-825c-def988a3f1b2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3bfc712b-ffa2-4fc0-825c-def988a3f1b2\") " pod="openstack/kube-state-metrics-0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.867400 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8sb9\" (UniqueName: \"kubernetes.io/projected/3bfc712b-ffa2-4fc0-825c-def988a3f1b2-kube-api-access-l8sb9\") pod \"kube-state-metrics-0\" (UID: \"3bfc712b-ffa2-4fc0-825c-def988a3f1b2\") " pod="openstack/kube-state-metrics-0" Nov 28 11:27:11 crc kubenswrapper[4772]: I1128 11:27:11.897107 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 11:27:12 crc kubenswrapper[4772]: I1128 11:27:12.006789 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="064be676-f5a5-4ae0-9fce-c2103f169de8" path="/var/lib/kubelet/pods/064be676-f5a5-4ae0-9fce-c2103f169de8/volumes" Nov 28 11:27:12 crc kubenswrapper[4772]: I1128 11:27:12.242203 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:27:12 crc kubenswrapper[4772]: I1128 11:27:12.242580 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="ceilometer-central-agent" containerID="cri-o://7843e8897d7706d954490602adeacc5680d781704959fb09b9c9ad812e913265" gracePeriod=30 Nov 28 11:27:12 crc kubenswrapper[4772]: I1128 11:27:12.242855 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="proxy-httpd" containerID="cri-o://007595cec97a6a92ec7a7d529af49f3edceaa4c0b5a8d64eddb3e7b374edd68d" gracePeriod=30 Nov 28 11:27:12 crc kubenswrapper[4772]: I1128 11:27:12.243044 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="ceilometer-notification-agent" containerID="cri-o://3adeeb58621dd9bac18a1615ba88a750255be7fc3e5e9f6ff09825ece35867de" gracePeriod=30 Nov 28 11:27:12 crc kubenswrapper[4772]: I1128 11:27:12.243118 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="sg-core" containerID="cri-o://850fbb2ad540ad6417d50f57a1e59d5994dfbce102d70c9af36fb4c687cb15ac" gracePeriod=30 Nov 28 11:27:12 crc kubenswrapper[4772]: I1128 11:27:12.426193 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 11:27:12 crc kubenswrapper[4772]: I1128 11:27:12.439800 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 11:27:12 crc kubenswrapper[4772]: I1128 11:27:12.505031 4772 generic.go:334] "Generic (PLEG): container finished" podID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerID="007595cec97a6a92ec7a7d529af49f3edceaa4c0b5a8d64eddb3e7b374edd68d" exitCode=0 Nov 28 11:27:12 crc kubenswrapper[4772]: I1128 11:27:12.505082 4772 generic.go:334] "Generic (PLEG): container finished" podID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerID="850fbb2ad540ad6417d50f57a1e59d5994dfbce102d70c9af36fb4c687cb15ac" exitCode=2 Nov 28 11:27:12 crc kubenswrapper[4772]: I1128 11:27:12.505219 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffab9862-3ac8-4a03-8ac1-3935bbddc294","Type":"ContainerDied","Data":"007595cec97a6a92ec7a7d529af49f3edceaa4c0b5a8d64eddb3e7b374edd68d"} Nov 28 11:27:12 crc kubenswrapper[4772]: I1128 11:27:12.505263 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffab9862-3ac8-4a03-8ac1-3935bbddc294","Type":"ContainerDied","Data":"850fbb2ad540ad6417d50f57a1e59d5994dfbce102d70c9af36fb4c687cb15ac"} Nov 28 11:27:12 crc kubenswrapper[4772]: I1128 11:27:12.508693 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"3bfc712b-ffa2-4fc0-825c-def988a3f1b2","Type":"ContainerStarted","Data":"8b14b4f81e2514a4ed766269333c29f9afc10a7eac8a3832476b81ddfbbcea1f"} Nov 28 11:27:13 crc kubenswrapper[4772]: I1128 11:27:13.526464 4772 generic.go:334] "Generic (PLEG): container finished" podID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerID="7843e8897d7706d954490602adeacc5680d781704959fb09b9c9ad812e913265" exitCode=0 Nov 28 11:27:13 crc kubenswrapper[4772]: I1128 11:27:13.526546 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffab9862-3ac8-4a03-8ac1-3935bbddc294","Type":"ContainerDied","Data":"7843e8897d7706d954490602adeacc5680d781704959fb09b9c9ad812e913265"} Nov 28 11:27:13 crc kubenswrapper[4772]: I1128 11:27:13.531436 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3bfc712b-ffa2-4fc0-825c-def988a3f1b2","Type":"ContainerStarted","Data":"e09ec4021a3834aacfb83c7ee13d2471f223f71385226b41fd787c7b4943ee9c"} Nov 28 11:27:13 crc kubenswrapper[4772]: I1128 11:27:13.532620 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 11:27:13 crc kubenswrapper[4772]: I1128 11:27:13.557425 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.194095168 podStartE2EDuration="2.557405663s" podCreationTimestamp="2025-11-28 11:27:11 +0000 UTC" firstStartedPulling="2025-11-28 11:27:12.439523755 +0000 UTC m=+1230.762766982" lastFinishedPulling="2025-11-28 11:27:12.80283425 +0000 UTC m=+1231.126077477" observedRunningTime="2025-11-28 11:27:13.551588713 +0000 UTC m=+1231.874831940" watchObservedRunningTime="2025-11-28 11:27:13.557405663 +0000 UTC m=+1231.880648880" Nov 28 11:27:14 crc kubenswrapper[4772]: I1128 11:27:14.994392 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.471737 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-68hmw"] Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.473699 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.477213 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.477407 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.484249 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-68hmw"] Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.534647 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-config-data\") pod \"nova-cell0-cell-mapping-68hmw\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.534762 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-68hmw\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.534821 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-scripts\") pod \"nova-cell0-cell-mapping-68hmw\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.535093 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkghb\" (UniqueName: \"kubernetes.io/projected/e87f3486-967b-47b7-87ff-b3c11d66e63c-kube-api-access-bkghb\") pod \"nova-cell0-cell-mapping-68hmw\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.636047 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-scripts\") pod \"nova-cell0-cell-mapping-68hmw\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.636138 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkghb\" (UniqueName: \"kubernetes.io/projected/e87f3486-967b-47b7-87ff-b3c11d66e63c-kube-api-access-bkghb\") pod \"nova-cell0-cell-mapping-68hmw\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.636240 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-config-data\") pod \"nova-cell0-cell-mapping-68hmw\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.636393 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-68hmw\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.649006 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-scripts\") pod \"nova-cell0-cell-mapping-68hmw\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.668127 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-68hmw\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.673095 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-config-data\") pod \"nova-cell0-cell-mapping-68hmw\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.688236 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkghb\" (UniqueName: \"kubernetes.io/projected/e87f3486-967b-47b7-87ff-b3c11d66e63c-kube-api-access-bkghb\") pod \"nova-cell0-cell-mapping-68hmw\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.735646 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.737539 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.744639 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.762395 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " pod="openstack/nova-api-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.762587 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-config-data\") pod \"nova-api-0\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " pod="openstack/nova-api-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.762683 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3113148b-3177-44d4-b53a-3b80a848b180-logs\") pod \"nova-api-0\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " pod="openstack/nova-api-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.762761 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndtcb\" (UniqueName: \"kubernetes.io/projected/3113148b-3177-44d4-b53a-3b80a848b180-kube-api-access-ndtcb\") pod \"nova-api-0\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " pod="openstack/nova-api-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.782689 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.784164 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.788663 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.797769 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.833570 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.866069 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-config-data\") pod \"nova-api-0\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " pod="openstack/nova-api-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.866124 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3113148b-3177-44d4-b53a-3b80a848b180-logs\") pod \"nova-api-0\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " pod="openstack/nova-api-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.866145 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/580db61b-24f2-490e-93fa-d07d9a55d6fb-config-data\") pod \"nova-scheduler-0\" (UID: \"580db61b-24f2-490e-93fa-d07d9a55d6fb\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.866175 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndtcb\" (UniqueName: \"kubernetes.io/projected/3113148b-3177-44d4-b53a-3b80a848b180-kube-api-access-ndtcb\") pod \"nova-api-0\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " pod="openstack/nova-api-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.866299 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " pod="openstack/nova-api-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.866316 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580db61b-24f2-490e-93fa-d07d9a55d6fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"580db61b-24f2-490e-93fa-d07d9a55d6fb\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.866337 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7zdw\" (UniqueName: \"kubernetes.io/projected/580db61b-24f2-490e-93fa-d07d9a55d6fb-kube-api-access-v7zdw\") pod \"nova-scheduler-0\" (UID: \"580db61b-24f2-490e-93fa-d07d9a55d6fb\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.882608 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3113148b-3177-44d4-b53a-3b80a848b180-logs\") pod \"nova-api-0\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " pod="openstack/nova-api-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.884829 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-config-data\") pod \"nova-api-0\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " pod="openstack/nova-api-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.904146 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " pod="openstack/nova-api-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.924432 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.940003 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndtcb\" (UniqueName: \"kubernetes.io/projected/3113148b-3177-44d4-b53a-3b80a848b180-kube-api-access-ndtcb\") pod \"nova-api-0\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " pod="openstack/nova-api-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.968265 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/580db61b-24f2-490e-93fa-d07d9a55d6fb-config-data\") pod \"nova-scheduler-0\" (UID: \"580db61b-24f2-490e-93fa-d07d9a55d6fb\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.968434 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580db61b-24f2-490e-93fa-d07d9a55d6fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"580db61b-24f2-490e-93fa-d07d9a55d6fb\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.968459 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7zdw\" (UniqueName: \"kubernetes.io/projected/580db61b-24f2-490e-93fa-d07d9a55d6fb-kube-api-access-v7zdw\") pod \"nova-scheduler-0\" (UID: \"580db61b-24f2-490e-93fa-d07d9a55d6fb\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.973024 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/580db61b-24f2-490e-93fa-d07d9a55d6fb-config-data\") pod \"nova-scheduler-0\" (UID: \"580db61b-24f2-490e-93fa-d07d9a55d6fb\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:15 crc kubenswrapper[4772]: I1128 11:27:15.987257 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580db61b-24f2-490e-93fa-d07d9a55d6fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"580db61b-24f2-490e-93fa-d07d9a55d6fb\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.024263 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7zdw\" (UniqueName: \"kubernetes.io/projected/580db61b-24f2-490e-93fa-d07d9a55d6fb-kube-api-access-v7zdw\") pod \"nova-scheduler-0\" (UID: \"580db61b-24f2-490e-93fa-d07d9a55d6fb\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.028215 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.031563 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.039476 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.049205 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.091878 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34e3a875-5116-44e0-9bd1-f5654a261a69-logs\") pod \"nova-metadata-0\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " pod="openstack/nova-metadata-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.092013 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz628\" (UniqueName: \"kubernetes.io/projected/34e3a875-5116-44e0-9bd1-f5654a261a69-kube-api-access-vz628\") pod \"nova-metadata-0\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " pod="openstack/nova-metadata-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.092094 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e3a875-5116-44e0-9bd1-f5654a261a69-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " pod="openstack/nova-metadata-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.092177 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34e3a875-5116-44e0-9bd1-f5654a261a69-config-data\") pod \"nova-metadata-0\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " pod="openstack/nova-metadata-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.115511 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.194847 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e3a875-5116-44e0-9bd1-f5654a261a69-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " pod="openstack/nova-metadata-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.195249 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34e3a875-5116-44e0-9bd1-f5654a261a69-config-data\") pod \"nova-metadata-0\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " pod="openstack/nova-metadata-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.195299 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34e3a875-5116-44e0-9bd1-f5654a261a69-logs\") pod \"nova-metadata-0\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " pod="openstack/nova-metadata-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.195390 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz628\" (UniqueName: \"kubernetes.io/projected/34e3a875-5116-44e0-9bd1-f5654a261a69-kube-api-access-vz628\") pod \"nova-metadata-0\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " pod="openstack/nova-metadata-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.196406 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34e3a875-5116-44e0-9bd1-f5654a261a69-logs\") pod \"nova-metadata-0\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " pod="openstack/nova-metadata-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.215256 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34e3a875-5116-44e0-9bd1-f5654a261a69-config-data\") pod \"nova-metadata-0\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " pod="openstack/nova-metadata-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.217219 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e3a875-5116-44e0-9bd1-f5654a261a69-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " pod="openstack/nova-metadata-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.221422 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.231010 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.237058 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.249588 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-vrxt8"] Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.254747 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.264327 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.302091 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz628\" (UniqueName: \"kubernetes.io/projected/34e3a875-5116-44e0-9bd1-f5654a261a69-kube-api-access-vz628\") pod \"nova-metadata-0\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " pod="openstack/nova-metadata-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.305622 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkcfb\" (UniqueName: \"kubernetes.io/projected/fa2cbac3-6468-4bf0-9929-29f9157fff5e-kube-api-access-fkcfb\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.305725 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.305889 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.305921 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-config\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.305957 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2cbac3-6468-4bf0-9929-29f9157fff5e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.306017 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.306054 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwqbh\" (UniqueName: \"kubernetes.io/projected/472f1041-5c63-4f6d-997c-db8b89dfaacf-kube-api-access-kwqbh\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.306133 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-dns-svc\") 
pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.306169 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2cbac3-6468-4bf0-9929-29f9157fff5e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.310070 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.369066 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-vrxt8"] Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.407891 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-dns-svc\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.407965 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2cbac3-6468-4bf0-9929-29f9157fff5e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.408023 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkcfb\" (UniqueName: \"kubernetes.io/projected/fa2cbac3-6468-4bf0-9929-29f9157fff5e-kube-api-access-fkcfb\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.408059 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.408126 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.408147 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-config\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.408171 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2cbac3-6468-4bf0-9929-29f9157fff5e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 
11:27:16.408207 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.408231 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwqbh\" (UniqueName: \"kubernetes.io/projected/472f1041-5c63-4f6d-997c-db8b89dfaacf-kube-api-access-kwqbh\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.409866 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-dns-svc\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.410465 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.410990 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-config\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.411845 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.412550 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.415333 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2cbac3-6468-4bf0-9929-29f9157fff5e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.418989 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2cbac3-6468-4bf0-9929-29f9157fff5e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.437634 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.442143 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkcfb\" (UniqueName: \"kubernetes.io/projected/fa2cbac3-6468-4bf0-9929-29f9157fff5e-kube-api-access-fkcfb\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.443831 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwqbh\" (UniqueName: \"kubernetes.io/projected/472f1041-5c63-4f6d-997c-db8b89dfaacf-kube-api-access-kwqbh\") pod \"dnsmasq-dns-bccf8f775-vrxt8\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.599285 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.624265 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.627350 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-68hmw"] Nov 28 11:27:16 crc kubenswrapper[4772]: W1128 11:27:16.657775 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode87f3486_967b_47b7_87ff_b3c11d66e63c.slice/crio-8d84ebdaf5790fbd715535a4b4a089e77fba1190d304ab0e343478d5e6de274c WatchSource:0}: Error finding container 8d84ebdaf5790fbd715535a4b4a089e77fba1190d304ab0e343478d5e6de274c: Status 404 returned error can't find the container with id 8d84ebdaf5790fbd715535a4b4a089e77fba1190d304ab0e343478d5e6de274c Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.743932 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:27:16 crc kubenswrapper[4772]: I1128 11:27:16.884941 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.064879 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.214895 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.336807 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-scripts\") pod \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.337084 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffab9862-3ac8-4a03-8ac1-3935bbddc294-log-httpd\") pod \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.337136 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-sg-core-conf-yaml\") pod \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.337231 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-combined-ca-bundle\") pod \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.337285 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9svr\" (UniqueName: \"kubernetes.io/projected/ffab9862-3ac8-4a03-8ac1-3935bbddc294-kube-api-access-p9svr\") pod \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.337314 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffab9862-3ac8-4a03-8ac1-3935bbddc294-run-httpd\") pod \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.337333 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-config-data\") pod \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\" (UID: \"ffab9862-3ac8-4a03-8ac1-3935bbddc294\") " Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.337856 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffab9862-3ac8-4a03-8ac1-3935bbddc294-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ffab9862-3ac8-4a03-8ac1-3935bbddc294" (UID: "ffab9862-3ac8-4a03-8ac1-3935bbddc294"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.340482 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffab9862-3ac8-4a03-8ac1-3935bbddc294-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ffab9862-3ac8-4a03-8ac1-3935bbddc294" (UID: "ffab9862-3ac8-4a03-8ac1-3935bbddc294"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.347641 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffab9862-3ac8-4a03-8ac1-3935bbddc294-kube-api-access-p9svr" (OuterVolumeSpecName: "kube-api-access-p9svr") pod "ffab9862-3ac8-4a03-8ac1-3935bbddc294" (UID: "ffab9862-3ac8-4a03-8ac1-3935bbddc294"). InnerVolumeSpecName "kube-api-access-p9svr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:27:17 crc kubenswrapper[4772]: W1128 11:27:17.356690 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa2cbac3_6468_4bf0_9929_29f9157fff5e.slice/crio-33bf8ace0fd45a8cca62aefdc525b8c5a390313f7374ecba0fac0daaa51cd097 WatchSource:0}: Error finding container 33bf8ace0fd45a8cca62aefdc525b8c5a390313f7374ecba0fac0daaa51cd097: Status 404 returned error can't find the container with id 33bf8ace0fd45a8cca62aefdc525b8c5a390313f7374ecba0fac0daaa51cd097 Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.357550 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-scripts" (OuterVolumeSpecName: "scripts") pod "ffab9862-3ac8-4a03-8ac1-3935bbddc294" (UID: "ffab9862-3ac8-4a03-8ac1-3935bbddc294"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.361687 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.387105 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ffab9862-3ac8-4a03-8ac1-3935bbddc294" (UID: "ffab9862-3ac8-4a03-8ac1-3935bbddc294"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.396531 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-vrxt8"] Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.429435 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lpnr6"] Nov 28 11:27:17 crc kubenswrapper[4772]: E1128 11:27:17.429938 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="proxy-httpd" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.429954 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="proxy-httpd" Nov 28 11:27:17 crc kubenswrapper[4772]: E1128 11:27:17.429978 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="ceilometer-central-agent" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.429985 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="ceilometer-central-agent" Nov 28 11:27:17 crc kubenswrapper[4772]: E1128 11:27:17.430003 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="ceilometer-notification-agent" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.430009 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="ceilometer-notification-agent" Nov 28 11:27:17 crc kubenswrapper[4772]: E1128 11:27:17.430022 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="sg-core" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.430027 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="sg-core" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.430201 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="ceilometer-notification-agent" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.430213 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="ceilometer-central-agent" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.430223 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="proxy-httpd" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.430233 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerName="sg-core" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.431141 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.436623 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.440640 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.444753 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.444789 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffab9862-3ac8-4a03-8ac1-3935bbddc294-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.444800 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.444813 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9svr\" (UniqueName: \"kubernetes.io/projected/ffab9862-3ac8-4a03-8ac1-3935bbddc294-kube-api-access-p9svr\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.444825 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffab9862-3ac8-4a03-8ac1-3935bbddc294-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.478663 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lpnr6"] Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.479579 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffab9862-3ac8-4a03-8ac1-3935bbddc294" (UID: "ffab9862-3ac8-4a03-8ac1-3935bbddc294"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.545442 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-config-data" (OuterVolumeSpecName: "config-data") pod "ffab9862-3ac8-4a03-8ac1-3935bbddc294" (UID: "ffab9862-3ac8-4a03-8ac1-3935bbddc294"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.548265 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrrv6\" (UniqueName: \"kubernetes.io/projected/bf647836-f37f-448a-ac84-c610cf7c0125-kube-api-access-zrrv6\") pod \"nova-cell1-conductor-db-sync-lpnr6\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.548344 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-scripts\") pod \"nova-cell1-conductor-db-sync-lpnr6\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.548459 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-config-data\") pod \"nova-cell1-conductor-db-sync-lpnr6\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.548523 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lpnr6\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.548585 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.548596 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffab9862-3ac8-4a03-8ac1-3935bbddc294-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.578628 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"580db61b-24f2-490e-93fa-d07d9a55d6fb","Type":"ContainerStarted","Data":"62835bf19eb6d7686f0757e54fa9047fab21d9d36f8095206593fef63a584643"} Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.593896 4772 generic.go:334] "Generic (PLEG): container finished" podID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" containerID="3adeeb58621dd9bac18a1615ba88a750255be7fc3e5e9f6ff09825ece35867de" exitCode=0 Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.593973 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffab9862-3ac8-4a03-8ac1-3935bbddc294","Type":"ContainerDied","Data":"3adeeb58621dd9bac18a1615ba88a750255be7fc3e5e9f6ff09825ece35867de"} Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.594009 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffab9862-3ac8-4a03-8ac1-3935bbddc294","Type":"ContainerDied","Data":"6753fa75f4c1d34c7198d3c135de78d8bc26e53b6874f2bcd857f08839e31469"} Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.594028 4772 scope.go:117] "RemoveContainer" 
containerID="007595cec97a6a92ec7a7d529af49f3edceaa4c0b5a8d64eddb3e7b374edd68d" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.594179 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.610766 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34e3a875-5116-44e0-9bd1-f5654a261a69","Type":"ContainerStarted","Data":"4bd4d2ca325f8353c999c937ca60b0fd387baa7eed89ebb37f49891e4a2a0016"} Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.621733 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3113148b-3177-44d4-b53a-3b80a848b180","Type":"ContainerStarted","Data":"46d09a5f7141ee58904c998f41707f2e06a2e90fe2ab90b2ee9bebaeffbef841"} Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.623289 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-68hmw" event={"ID":"e87f3486-967b-47b7-87ff-b3c11d66e63c","Type":"ContainerStarted","Data":"24f05ed572a5fcc4353f42b0c4e40ca5c6e80d443b4544ff27d735fb05f312b5"} Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.623327 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-68hmw" event={"ID":"e87f3486-967b-47b7-87ff-b3c11d66e63c","Type":"ContainerStarted","Data":"8d84ebdaf5790fbd715535a4b4a089e77fba1190d304ab0e343478d5e6de274c"} Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.627690 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" event={"ID":"472f1041-5c63-4f6d-997c-db8b89dfaacf","Type":"ContainerStarted","Data":"629e3cd2e684f8ad4cdf76a3cb0e532f849ad8d061174db0ac5268c6b97d3714"} Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.650414 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-config-data\") pod \"nova-cell1-conductor-db-sync-lpnr6\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.650493 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lpnr6\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.650544 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrrv6\" (UniqueName: \"kubernetes.io/projected/bf647836-f37f-448a-ac84-c610cf7c0125-kube-api-access-zrrv6\") pod \"nova-cell1-conductor-db-sync-lpnr6\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.650595 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-scripts\") pod \"nova-cell1-conductor-db-sync-lpnr6\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.651442 4772 scope.go:117] "RemoveContainer" 
containerID="850fbb2ad540ad6417d50f57a1e59d5994dfbce102d70c9af36fb4c687cb15ac" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.651627 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa2cbac3-6468-4bf0-9929-29f9157fff5e","Type":"ContainerStarted","Data":"33bf8ace0fd45a8cca62aefdc525b8c5a390313f7374ecba0fac0daaa51cd097"} Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.659242 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lpnr6\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.665073 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-config-data\") pod \"nova-cell1-conductor-db-sync-lpnr6\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.665637 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-scripts\") pod \"nova-cell1-conductor-db-sync-lpnr6\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.683151 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.689663 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrrv6\" (UniqueName: \"kubernetes.io/projected/bf647836-f37f-448a-ac84-c610cf7c0125-kube-api-access-zrrv6\") pod \"nova-cell1-conductor-db-sync-lpnr6\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.703419 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.713441 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.716633 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.720510 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.721852 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.722071 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.724464 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-68hmw" podStartSLOduration=2.724439181 podStartE2EDuration="2.724439181s" podCreationTimestamp="2025-11-28 11:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:27:17.666893201 +0000 UTC m=+1235.990136438" watchObservedRunningTime="2025-11-28 11:27:17.724439181 +0000 UTC m=+1236.047682408" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.740673 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.770820 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.855932 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-config-data\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.856088 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21a2205c-7bba-4f48-b2f7-03196fa277ac-run-httpd\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.856222 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.857341 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21a2205c-7bba-4f48-b2f7-03196fa277ac-log-httpd\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.857401 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.857462 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfpjs\" (UniqueName: 
\"kubernetes.io/projected/21a2205c-7bba-4f48-b2f7-03196fa277ac-kube-api-access-hfpjs\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.857514 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.857746 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-scripts\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.957767 4772 scope.go:117] "RemoveContainer" containerID="3adeeb58621dd9bac18a1615ba88a750255be7fc3e5e9f6ff09825ece35867de" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.959678 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21a2205c-7bba-4f48-b2f7-03196fa277ac-log-httpd\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.959712 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.959744 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfpjs\" (UniqueName: \"kubernetes.io/projected/21a2205c-7bba-4f48-b2f7-03196fa277ac-kube-api-access-hfpjs\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.959771 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.959827 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-scripts\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.959861 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-config-data\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.959889 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21a2205c-7bba-4f48-b2f7-03196fa277ac-run-httpd\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " 
pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.959940 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.962039 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21a2205c-7bba-4f48-b2f7-03196fa277ac-run-httpd\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.968150 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.969090 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21a2205c-7bba-4f48-b2f7-03196fa277ac-log-httpd\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.972947 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-scripts\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.976030 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-config-data\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.977334 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:17 crc kubenswrapper[4772]: I1128 11:27:17.978289 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.004847 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfpjs\" (UniqueName: \"kubernetes.io/projected/21a2205c-7bba-4f48-b2f7-03196fa277ac-kube-api-access-hfpjs\") pod \"ceilometer-0\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " pod="openstack/ceilometer-0" Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.049685 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffab9862-3ac8-4a03-8ac1-3935bbddc294" path="/var/lib/kubelet/pods/ffab9862-3ac8-4a03-8ac1-3935bbddc294/volumes" Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.071335 4772 scope.go:117] "RemoveContainer" 
containerID="7843e8897d7706d954490602adeacc5680d781704959fb09b9c9ad812e913265" Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.121465 4772 scope.go:117] "RemoveContainer" containerID="007595cec97a6a92ec7a7d529af49f3edceaa4c0b5a8d64eddb3e7b374edd68d" Nov 28 11:27:18 crc kubenswrapper[4772]: E1128 11:27:18.122070 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"007595cec97a6a92ec7a7d529af49f3edceaa4c0b5a8d64eddb3e7b374edd68d\": container with ID starting with 007595cec97a6a92ec7a7d529af49f3edceaa4c0b5a8d64eddb3e7b374edd68d not found: ID does not exist" containerID="007595cec97a6a92ec7a7d529af49f3edceaa4c0b5a8d64eddb3e7b374edd68d" Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.122125 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"007595cec97a6a92ec7a7d529af49f3edceaa4c0b5a8d64eddb3e7b374edd68d"} err="failed to get container status \"007595cec97a6a92ec7a7d529af49f3edceaa4c0b5a8d64eddb3e7b374edd68d\": rpc error: code = NotFound desc = could not find container \"007595cec97a6a92ec7a7d529af49f3edceaa4c0b5a8d64eddb3e7b374edd68d\": container with ID starting with 007595cec97a6a92ec7a7d529af49f3edceaa4c0b5a8d64eddb3e7b374edd68d not found: ID does not exist" Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.122156 4772 scope.go:117] "RemoveContainer" containerID="850fbb2ad540ad6417d50f57a1e59d5994dfbce102d70c9af36fb4c687cb15ac" Nov 28 11:27:18 crc kubenswrapper[4772]: E1128 11:27:18.122541 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"850fbb2ad540ad6417d50f57a1e59d5994dfbce102d70c9af36fb4c687cb15ac\": container with ID starting with 850fbb2ad540ad6417d50f57a1e59d5994dfbce102d70c9af36fb4c687cb15ac not found: ID does not exist" containerID="850fbb2ad540ad6417d50f57a1e59d5994dfbce102d70c9af36fb4c687cb15ac" Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.122570 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"850fbb2ad540ad6417d50f57a1e59d5994dfbce102d70c9af36fb4c687cb15ac"} err="failed to get container status \"850fbb2ad540ad6417d50f57a1e59d5994dfbce102d70c9af36fb4c687cb15ac\": rpc error: code = NotFound desc = could not find container \"850fbb2ad540ad6417d50f57a1e59d5994dfbce102d70c9af36fb4c687cb15ac\": container with ID starting with 850fbb2ad540ad6417d50f57a1e59d5994dfbce102d70c9af36fb4c687cb15ac not found: ID does not exist" Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.122587 4772 scope.go:117] "RemoveContainer" containerID="3adeeb58621dd9bac18a1615ba88a750255be7fc3e5e9f6ff09825ece35867de" Nov 28 11:27:18 crc kubenswrapper[4772]: E1128 11:27:18.123120 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3adeeb58621dd9bac18a1615ba88a750255be7fc3e5e9f6ff09825ece35867de\": container with ID starting with 3adeeb58621dd9bac18a1615ba88a750255be7fc3e5e9f6ff09825ece35867de not found: ID does not exist" containerID="3adeeb58621dd9bac18a1615ba88a750255be7fc3e5e9f6ff09825ece35867de" Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.123214 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3adeeb58621dd9bac18a1615ba88a750255be7fc3e5e9f6ff09825ece35867de"} err="failed to get container status \"3adeeb58621dd9bac18a1615ba88a750255be7fc3e5e9f6ff09825ece35867de\": rpc error: code = 
NotFound desc = could not find container \"3adeeb58621dd9bac18a1615ba88a750255be7fc3e5e9f6ff09825ece35867de\": container with ID starting with 3adeeb58621dd9bac18a1615ba88a750255be7fc3e5e9f6ff09825ece35867de not found: ID does not exist" Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.123268 4772 scope.go:117] "RemoveContainer" containerID="7843e8897d7706d954490602adeacc5680d781704959fb09b9c9ad812e913265" Nov 28 11:27:18 crc kubenswrapper[4772]: E1128 11:27:18.123967 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7843e8897d7706d954490602adeacc5680d781704959fb09b9c9ad812e913265\": container with ID starting with 7843e8897d7706d954490602adeacc5680d781704959fb09b9c9ad812e913265 not found: ID does not exist" containerID="7843e8897d7706d954490602adeacc5680d781704959fb09b9c9ad812e913265" Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.124015 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7843e8897d7706d954490602adeacc5680d781704959fb09b9c9ad812e913265"} err="failed to get container status \"7843e8897d7706d954490602adeacc5680d781704959fb09b9c9ad812e913265\": rpc error: code = NotFound desc = could not find container \"7843e8897d7706d954490602adeacc5680d781704959fb09b9c9ad812e913265\": container with ID starting with 7843e8897d7706d954490602adeacc5680d781704959fb09b9c9ad812e913265 not found: ID does not exist" Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.255172 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.328345 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lpnr6"] Nov 28 11:27:18 crc kubenswrapper[4772]: W1128 11:27:18.332192 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf647836_f37f_448a_ac84_c610cf7c0125.slice/crio-07863ad84a54c9e1f1f1d332464d96124cd5fef98baecad0a9bc66979cb0ccce WatchSource:0}: Error finding container 07863ad84a54c9e1f1f1d332464d96124cd5fef98baecad0a9bc66979cb0ccce: Status 404 returned error can't find the container with id 07863ad84a54c9e1f1f1d332464d96124cd5fef98baecad0a9bc66979cb0ccce Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.709249 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lpnr6" event={"ID":"bf647836-f37f-448a-ac84-c610cf7c0125","Type":"ContainerStarted","Data":"07863ad84a54c9e1f1f1d332464d96124cd5fef98baecad0a9bc66979cb0ccce"} Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.712282 4772 generic.go:334] "Generic (PLEG): container finished" podID="472f1041-5c63-4f6d-997c-db8b89dfaacf" containerID="2469a55ac671332b6c3c01bd8fec3decb81536081cb3c60ad4d7afa1f4341c8f" exitCode=0 Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.712415 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" event={"ID":"472f1041-5c63-4f6d-997c-db8b89dfaacf","Type":"ContainerDied","Data":"2469a55ac671332b6c3c01bd8fec3decb81536081cb3c60ad4d7afa1f4341c8f"} Nov 28 11:27:18 crc kubenswrapper[4772]: I1128 11:27:18.827390 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:27:19 crc kubenswrapper[4772]: W1128 11:27:19.151097 4772 manager.go:1169] Failed to process watch event {EventType:0 
Nov 28 11:27:19 crc kubenswrapper[4772]: W1128 11:27:19.151097 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21a2205c_7bba_4f48_b2f7_03196fa277ac.slice/crio-caa8acf20ebfa79e910043e9b0aa172e39e8fcccd62a1045ea76c31b00807aa0 WatchSource:0}: Error finding container caa8acf20ebfa79e910043e9b0aa172e39e8fcccd62a1045ea76c31b00807aa0: Status 404 returned error can't find the container with id caa8acf20ebfa79e910043e9b0aa172e39e8fcccd62a1045ea76c31b00807aa0
Nov 28 11:27:19 crc kubenswrapper[4772]: I1128 11:27:19.691234 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 11:27:19 crc kubenswrapper[4772]: I1128 11:27:19.728958 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21a2205c-7bba-4f48-b2f7-03196fa277ac","Type":"ContainerStarted","Data":"caa8acf20ebfa79e910043e9b0aa172e39e8fcccd62a1045ea76c31b00807aa0"}
Nov 28 11:27:19 crc kubenswrapper[4772]: I1128 11:27:19.731675 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lpnr6" event={"ID":"bf647836-f37f-448a-ac84-c610cf7c0125","Type":"ContainerStarted","Data":"bb6d3d966ff672c7337ae5a3f560b4a3bd0f73652b049cc3f8f575f5bf568d4a"}
Nov 28 11:27:19 crc kubenswrapper[4772]: I1128 11:27:19.754928 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-lpnr6" podStartSLOduration=2.754901631 podStartE2EDuration="2.754901631s" podCreationTimestamp="2025-11-28 11:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:27:19.752838337 +0000 UTC m=+1238.076081564" watchObservedRunningTime="2025-11-28 11:27:19.754901631 +0000 UTC m=+1238.078144858"
Nov 28 11:27:19 crc kubenswrapper[4772]: I1128 11:27:19.786478 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.768408 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"580db61b-24f2-490e-93fa-d07d9a55d6fb","Type":"ContainerStarted","Data":"26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da"}
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.772451 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34e3a875-5116-44e0-9bd1-f5654a261a69","Type":"ContainerStarted","Data":"4d6d3ad85b9ba41543a419add92a6781ac8a4be3bb0d51c69f780c604eea4fda"}
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.772485 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34e3a875-5116-44e0-9bd1-f5654a261a69","Type":"ContainerStarted","Data":"068afbab709df1f68bf171423eee85bce55241156243be381c9462329ab1ecae"}
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.772569 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="34e3a875-5116-44e0-9bd1-f5654a261a69" containerName="nova-metadata-log" containerID="cri-o://068afbab709df1f68bf171423eee85bce55241156243be381c9462329ab1ecae" gracePeriod=30
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.772698 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="34e3a875-5116-44e0-9bd1-f5654a261a69" containerName="nova-metadata-metadata" containerID="cri-o://4d6d3ad85b9ba41543a419add92a6781ac8a4be3bb0d51c69f780c604eea4fda" gracePeriod=30
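[Editor's note] The "Killing container with a grace period" records show the kubelet's two-phase stop: the runtime gets gracePeriod seconds to bring the container down cleanly, and only after that deadline does the kubelet escalate to a forced kill. A sketch of that sequencing, assuming a generic runtime client (Runtime and killWithGracePeriod are illustrative names, not kubelet's actual types):

```go
// Sketch only: Runtime and killWithGracePeriod are illustrative names.
package sketch

import (
	"context"
	"fmt"
	"time"
)

type Runtime interface {
	StopContainer(ctx context.Context, id string) error // polite stop (SIGTERM)
	KillContainer(id string) error                      // forced stop (SIGKILL)
}

// killWithGracePeriod mirrors "Killing container with a grace period":
// give the runtime gracePeriod seconds to stop the container cleanly,
// then escalate only if the deadline passes.
func killWithGracePeriod(rt Runtime, id string, gracePeriod int64) error {
	ctx, cancel := context.WithTimeout(context.Background(),
		time.Duration(gracePeriod)*time.Second)
	defer cancel()

	if err := rt.StopContainer(ctx, id); err == nil {
		return nil // exited within the grace period
	}
	if err := rt.KillContainer(id); err != nil {
		return fmt.Errorf("force kill %s: %w", id, err)
	}
	return nil
}
```

The exitCode=143 that nova-metadata-log reports a second later is the first phase succeeding: 143 = 128 + SIGTERM(15), so no forced kill was needed.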
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.781740 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3113148b-3177-44d4-b53a-3b80a848b180","Type":"ContainerStarted","Data":"cbc35f8fb6b97108128d753e8ca7dae2583ed5a69f3a03806ae7a85f5227623f"}
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.781776 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3113148b-3177-44d4-b53a-3b80a848b180","Type":"ContainerStarted","Data":"d9b2f8cfd0f662bfc606ac72f944bc6fab6b20ce619a537af5fc029652d28e04"}
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.791581 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.715481599 podStartE2EDuration="6.791557312s" podCreationTimestamp="2025-11-28 11:27:15 +0000 UTC" firstStartedPulling="2025-11-28 11:27:16.781666046 +0000 UTC m=+1235.104909273" lastFinishedPulling="2025-11-28 11:27:20.857741759 +0000 UTC m=+1239.180984986" observedRunningTime="2025-11-28 11:27:21.785713841 +0000 UTC m=+1240.108957078" watchObservedRunningTime="2025-11-28 11:27:21.791557312 +0000 UTC m=+1240.114800539"
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.797929 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" event={"ID":"472f1041-5c63-4f6d-997c-db8b89dfaacf","Type":"ContainerStarted","Data":"8c8f80baece97d10094b75e96f7d719cc9925a7c840110c798ec4a2e70a82fc8"}
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.798178 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-vrxt8"
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.810112 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="fa2cbac3-6468-4bf0-9929-29f9157fff5e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f" gracePeriod=30
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.812005 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa2cbac3-6468-4bf0-9929-29f9157fff5e","Type":"ContainerStarted","Data":"6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f"}
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.824548 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.864869936 podStartE2EDuration="6.824526385s" podCreationTimestamp="2025-11-28 11:27:15 +0000 UTC" firstStartedPulling="2025-11-28 11:27:16.894817905 +0000 UTC m=+1235.218061132" lastFinishedPulling="2025-11-28 11:27:20.854474354 +0000 UTC m=+1239.177717581" observedRunningTime="2025-11-28 11:27:21.821535368 +0000 UTC m=+1240.144778605" watchObservedRunningTime="2025-11-28 11:27:21.824526385 +0000 UTC m=+1240.147769612"
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.833327 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21a2205c-7bba-4f48-b2f7-03196fa277ac","Type":"ContainerStarted","Data":"fe7fd7df5705c5a6283e2cefdcf631241a9a16ffd2d09ed0e42866eb296db7a1"}
Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.863074 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.123376177 podStartE2EDuration="6.863047032s" podCreationTimestamp="2025-11-28 11:27:15 +0000 UTC" firstStartedPulling="2025-11-28 11:27:17.117313994 +0000 UTC m=+1235.440557221" lastFinishedPulling="2025-11-28 11:27:20.856984849 +0000 UTC m=+1239.180228076" observedRunningTime="2025-11-28 11:27:21.846855843 +0000 UTC m=+1240.170099090" watchObservedRunningTime="2025-11-28 11:27:21.863047032 +0000 UTC m=+1240.186290249"
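[Editor's note] Each "Observed pod startup duration" record carries two figures: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally subtracts the image-pull window (firstStartedPulling to lastFinishedPulling; the zero timestamps "0001-01-01 00:00:00" mean no pull happened). A small reproduction of the nova-scheduler-0 arithmetic above; the helper name is mine, not the tracker's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// startupDurations reproduces the arithmetic behind "Observed pod
// startup duration": E2E is running-observed minus pod creation; the
// SLO figure also excludes time spent pulling images.
func startupDurations(created, firstPull, lastPull, running time.Time) (slo, e2e time.Duration) {
	e2e = running.Sub(created)
	slo = e2e
	if !firstPull.IsZero() && !lastPull.IsZero() {
		slo -= lastPull.Sub(firstPull) // image pull time does not count against the SLO
	}
	return slo, e2e
}

func main() {
	// Timestamps for openstack/nova-scheduler-0, taken from the log above.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-28 11:27:15 +0000 UTC")
	firstPull := parse("2025-11-28 11:27:16.781666046 +0000 UTC")
	lastPull := parse("2025-11-28 11:27:20.857741759 +0000 UTC")
	running := parse("2025-11-28 11:27:21.791557312 +0000 UTC")

	slo, e2e := startupDurations(created, firstPull, lastPull, running)
	fmt.Println(slo.Seconds(), e2e.Seconds()) // 2.715481599 6.791557312
}
```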
UTC" firstStartedPulling="2025-11-28 11:27:17.117313994 +0000 UTC m=+1235.440557221" lastFinishedPulling="2025-11-28 11:27:20.856984849 +0000 UTC m=+1239.180228076" observedRunningTime="2025-11-28 11:27:21.846855843 +0000 UTC m=+1240.170099090" watchObservedRunningTime="2025-11-28 11:27:21.863047032 +0000 UTC m=+1240.186290249" Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.897986 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" podStartSLOduration=5.897961476 podStartE2EDuration="5.897961476s" podCreationTimestamp="2025-11-28 11:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:27:21.890888083 +0000 UTC m=+1240.214131310" watchObservedRunningTime="2025-11-28 11:27:21.897961476 +0000 UTC m=+1240.221204693" Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.931207 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.451106281 podStartE2EDuration="5.931180436s" podCreationTimestamp="2025-11-28 11:27:16 +0000 UTC" firstStartedPulling="2025-11-28 11:27:17.361051214 +0000 UTC m=+1235.684294441" lastFinishedPulling="2025-11-28 11:27:20.841125369 +0000 UTC m=+1239.164368596" observedRunningTime="2025-11-28 11:27:21.923525458 +0000 UTC m=+1240.246768685" watchObservedRunningTime="2025-11-28 11:27:21.931180436 +0000 UTC m=+1240.254423663" Nov 28 11:27:21 crc kubenswrapper[4772]: I1128 11:27:21.936604 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 28 11:27:22 crc kubenswrapper[4772]: I1128 11:27:22.863430 4772 generic.go:334] "Generic (PLEG): container finished" podID="34e3a875-5116-44e0-9bd1-f5654a261a69" containerID="068afbab709df1f68bf171423eee85bce55241156243be381c9462329ab1ecae" exitCode=143 Nov 28 11:27:22 crc kubenswrapper[4772]: I1128 11:27:22.863648 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34e3a875-5116-44e0-9bd1-f5654a261a69","Type":"ContainerDied","Data":"068afbab709df1f68bf171423eee85bce55241156243be381c9462329ab1ecae"} Nov 28 11:27:22 crc kubenswrapper[4772]: I1128 11:27:22.881770 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21a2205c-7bba-4f48-b2f7-03196fa277ac","Type":"ContainerStarted","Data":"e6f5cf45f4f34ea85fedda0107ce488e9202727324a8f860867bf686354b2941"} Nov 28 11:27:23 crc kubenswrapper[4772]: I1128 11:27:23.895014 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21a2205c-7bba-4f48-b2f7-03196fa277ac","Type":"ContainerStarted","Data":"075ed32bb6b21e0acdbfe74c9f067351e6dd615261fd0085f319aaf621a6255f"} Nov 28 11:27:23 crc kubenswrapper[4772]: I1128 11:27:23.896092 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:27:23 crc kubenswrapper[4772]: I1128 11:27:23.896160 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 28 11:27:24 crc kubenswrapper[4772]: I1128 11:27:24.910321 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21a2205c-7bba-4f48-b2f7-03196fa277ac","Type":"ContainerStarted","Data":"318b495740122cc74bd0c7323140f8bef3f7db6cb1bfbcfdf5970c3b6422f1d2"} Nov 28 11:27:24 crc kubenswrapper[4772]: I1128 11:27:24.911383 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 11:27:24 crc kubenswrapper[4772]: I1128 11:27:24.938170 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.552813645 podStartE2EDuration="7.938144244s" podCreationTimestamp="2025-11-28 11:27:17 +0000 UTC" firstStartedPulling="2025-11-28 11:27:20.01216737 +0000 UTC m=+1238.335410587" lastFinishedPulling="2025-11-28 11:27:24.397497959 +0000 UTC m=+1242.720741186" observedRunningTime="2025-11-28 11:27:24.936734488 +0000 UTC m=+1243.259977725" watchObservedRunningTime="2025-11-28 11:27:24.938144244 +0000 UTC m=+1243.261387471" Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.117450 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.117781 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.256101 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.256146 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.304199 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.438872 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.438971 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.601401 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.626583 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.728070 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pv7n6"] Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.728663 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" podUID="923ff71f-a546-41a6-a825-d01d907c7763" containerName="dnsmasq-dns" containerID="cri-o://dc28d29af8f70c7cb4b515a47b7688cbd5d1918fd5c4fc8036912a89fbe3c272" gracePeriod=10 Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.938083 4772 generic.go:334] "Generic (PLEG): container finished" podID="923ff71f-a546-41a6-a825-d01d907c7763" containerID="dc28d29af8f70c7cb4b515a47b7688cbd5d1918fd5c4fc8036912a89fbe3c272" exitCode=0 Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.938183 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
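[Editor's note] The probe records here are plain HTTP GETs with a hard deadline: the machine-config-daemon liveness check fails with "connection refused" against http://127.0.0.1:8798/health, and the nova-api startup checks (a few records below) time out against http://10.217.0.185:8774/. A minimal host-side analogue of such a check; httpProbe is an illustrative helper, not kubelet's prober:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// httpProbe mimics an HTTP liveness/startup check: GET the endpoint
// under a deadline; any transport error ("connect: connection refused",
// "context deadline exceeded") or an unexpected status is a failure.
func httpProbe(url string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// The two endpoints seen failing in the surrounding log records.
	fmt.Println(httpProbe("http://127.0.0.1:8798/health", time.Second))
	fmt.Println(httpProbe("http://10.217.0.185:8774/", time.Second))
}
```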
pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" event={"ID":"923ff71f-a546-41a6-a825-d01d907c7763","Type":"ContainerDied","Data":"dc28d29af8f70c7cb4b515a47b7688cbd5d1918fd5c4fc8036912a89fbe3c272"} Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.948413 4772 generic.go:334] "Generic (PLEG): container finished" podID="e87f3486-967b-47b7-87ff-b3c11d66e63c" containerID="24f05ed572a5fcc4353f42b0c4e40ca5c6e80d443b4544ff27d735fb05f312b5" exitCode=0 Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.948578 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-68hmw" event={"ID":"e87f3486-967b-47b7-87ff-b3c11d66e63c","Type":"ContainerDied","Data":"24f05ed572a5fcc4353f42b0c4e40ca5c6e80d443b4544ff27d735fb05f312b5"} Nov 28 11:27:26 crc kubenswrapper[4772]: I1128 11:27:26.996381 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.204600 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3113148b-3177-44d4-b53a-3b80a848b180" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.204703 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3113148b-3177-44d4-b53a-3b80a848b180" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.391432 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.454289 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-config\") pod \"923ff71f-a546-41a6-a825-d01d907c7763\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.454469 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sf6q\" (UniqueName: \"kubernetes.io/projected/923ff71f-a546-41a6-a825-d01d907c7763-kube-api-access-8sf6q\") pod \"923ff71f-a546-41a6-a825-d01d907c7763\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.454724 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-dns-swift-storage-0\") pod \"923ff71f-a546-41a6-a825-d01d907c7763\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.454766 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-ovsdbserver-nb\") pod \"923ff71f-a546-41a6-a825-d01d907c7763\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.454821 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-ovsdbserver-sb\") pod 
\"923ff71f-a546-41a6-a825-d01d907c7763\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.454897 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-dns-svc\") pod \"923ff71f-a546-41a6-a825-d01d907c7763\" (UID: \"923ff71f-a546-41a6-a825-d01d907c7763\") " Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.489809 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/923ff71f-a546-41a6-a825-d01d907c7763-kube-api-access-8sf6q" (OuterVolumeSpecName: "kube-api-access-8sf6q") pod "923ff71f-a546-41a6-a825-d01d907c7763" (UID: "923ff71f-a546-41a6-a825-d01d907c7763"). InnerVolumeSpecName "kube-api-access-8sf6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.558288 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sf6q\" (UniqueName: \"kubernetes.io/projected/923ff71f-a546-41a6-a825-d01d907c7763-kube-api-access-8sf6q\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.559863 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "923ff71f-a546-41a6-a825-d01d907c7763" (UID: "923ff71f-a546-41a6-a825-d01d907c7763"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.641787 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "923ff71f-a546-41a6-a825-d01d907c7763" (UID: "923ff71f-a546-41a6-a825-d01d907c7763"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.646916 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "923ff71f-a546-41a6-a825-d01d907c7763" (UID: "923ff71f-a546-41a6-a825-d01d907c7763"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.647946 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "923ff71f-a546-41a6-a825-d01d907c7763" (UID: "923ff71f-a546-41a6-a825-d01d907c7763"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.655069 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-config" (OuterVolumeSpecName: "config") pod "923ff71f-a546-41a6-a825-d01d907c7763" (UID: "923ff71f-a546-41a6-a825-d01d907c7763"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.661912 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.662233 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.662295 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.662377 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.662438 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923ff71f-a546-41a6-a825-d01d907c7763-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.968898 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" event={"ID":"923ff71f-a546-41a6-a825-d01d907c7763","Type":"ContainerDied","Data":"f791b60d91d14da7405b2bc912fa62ce8934a22816658a71c6fa0f7bee45a335"} Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.968962 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-pv7n6" Nov 28 11:27:27 crc kubenswrapper[4772]: I1128 11:27:27.968980 4772 scope.go:117] "RemoveContainer" containerID="dc28d29af8f70c7cb4b515a47b7688cbd5d1918fd5c4fc8036912a89fbe3c272" Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.017665 4772 scope.go:117] "RemoveContainer" containerID="494dc4d457caf70532bedc2c10a5ca239db3678d8b2c79eb3577d3cd10a7974f" Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.020153 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pv7n6"] Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.029851 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pv7n6"] Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.450374 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.486438 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-scripts\") pod \"e87f3486-967b-47b7-87ff-b3c11d66e63c\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.486568 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkghb\" (UniqueName: \"kubernetes.io/projected/e87f3486-967b-47b7-87ff-b3c11d66e63c-kube-api-access-bkghb\") pod \"e87f3486-967b-47b7-87ff-b3c11d66e63c\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.486793 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-config-data\") pod \"e87f3486-967b-47b7-87ff-b3c11d66e63c\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.486959 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-combined-ca-bundle\") pod \"e87f3486-967b-47b7-87ff-b3c11d66e63c\" (UID: \"e87f3486-967b-47b7-87ff-b3c11d66e63c\") " Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.500736 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e87f3486-967b-47b7-87ff-b3c11d66e63c-kube-api-access-bkghb" (OuterVolumeSpecName: "kube-api-access-bkghb") pod "e87f3486-967b-47b7-87ff-b3c11d66e63c" (UID: "e87f3486-967b-47b7-87ff-b3c11d66e63c"). InnerVolumeSpecName "kube-api-access-bkghb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.531853 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-scripts" (OuterVolumeSpecName: "scripts") pod "e87f3486-967b-47b7-87ff-b3c11d66e63c" (UID: "e87f3486-967b-47b7-87ff-b3c11d66e63c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.547183 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-config-data" (OuterVolumeSpecName: "config-data") pod "e87f3486-967b-47b7-87ff-b3c11d66e63c" (UID: "e87f3486-967b-47b7-87ff-b3c11d66e63c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.565439 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e87f3486-967b-47b7-87ff-b3c11d66e63c" (UID: "e87f3486-967b-47b7-87ff-b3c11d66e63c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.589998 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.590253 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkghb\" (UniqueName: \"kubernetes.io/projected/e87f3486-967b-47b7-87ff-b3c11d66e63c-kube-api-access-bkghb\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.590329 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.590526 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87f3486-967b-47b7-87ff-b3c11d66e63c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.985594 4772 generic.go:334] "Generic (PLEG): container finished" podID="bf647836-f37f-448a-ac84-c610cf7c0125" containerID="bb6d3d966ff672c7337ae5a3f560b4a3bd0f73652b049cc3f8f575f5bf568d4a" exitCode=0 Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.985693 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lpnr6" event={"ID":"bf647836-f37f-448a-ac84-c610cf7c0125","Type":"ContainerDied","Data":"bb6d3d966ff672c7337ae5a3f560b4a3bd0f73652b049cc3f8f575f5bf568d4a"} Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.988418 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-68hmw" event={"ID":"e87f3486-967b-47b7-87ff-b3c11d66e63c","Type":"ContainerDied","Data":"8d84ebdaf5790fbd715535a4b4a089e77fba1190d304ab0e343478d5e6de274c"} Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.988460 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d84ebdaf5790fbd715535a4b4a089e77fba1190d304ab0e343478d5e6de274c" Nov 28 11:27:28 crc kubenswrapper[4772]: I1128 11:27:28.988576 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-68hmw" Nov 28 11:27:29 crc kubenswrapper[4772]: I1128 11:27:29.185385 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:27:29 crc kubenswrapper[4772]: I1128 11:27:29.192688 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3113148b-3177-44d4-b53a-3b80a848b180" containerName="nova-api-log" containerID="cri-o://d9b2f8cfd0f662bfc606ac72f944bc6fab6b20ce619a537af5fc029652d28e04" gracePeriod=30 Nov 28 11:27:29 crc kubenswrapper[4772]: I1128 11:27:29.192912 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3113148b-3177-44d4-b53a-3b80a848b180" containerName="nova-api-api" containerID="cri-o://cbc35f8fb6b97108128d753e8ca7dae2583ed5a69f3a03806ae7a85f5227623f" gracePeriod=30 Nov 28 11:27:29 crc kubenswrapper[4772]: I1128 11:27:29.205815 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:27:29 crc kubenswrapper[4772]: I1128 11:27:29.206088 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="580db61b-24f2-490e-93fa-d07d9a55d6fb" containerName="nova-scheduler-scheduler" containerID="cri-o://26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da" gracePeriod=30 Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.002048 4772 generic.go:334] "Generic (PLEG): container finished" podID="3113148b-3177-44d4-b53a-3b80a848b180" containerID="d9b2f8cfd0f662bfc606ac72f944bc6fab6b20ce619a537af5fc029652d28e04" exitCode=143 Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.011758 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="923ff71f-a546-41a6-a825-d01d907c7763" path="/var/lib/kubelet/pods/923ff71f-a546-41a6-a825-d01d907c7763/volumes" Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.012723 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3113148b-3177-44d4-b53a-3b80a848b180","Type":"ContainerDied","Data":"d9b2f8cfd0f662bfc606ac72f944bc6fab6b20ce619a537af5fc029652d28e04"} Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.407688 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.434585 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-scripts\") pod \"bf647836-f37f-448a-ac84-c610cf7c0125\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.434655 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-combined-ca-bundle\") pod \"bf647836-f37f-448a-ac84-c610cf7c0125\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.434713 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-config-data\") pod \"bf647836-f37f-448a-ac84-c610cf7c0125\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.434765 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrrv6\" (UniqueName: \"kubernetes.io/projected/bf647836-f37f-448a-ac84-c610cf7c0125-kube-api-access-zrrv6\") pod \"bf647836-f37f-448a-ac84-c610cf7c0125\" (UID: \"bf647836-f37f-448a-ac84-c610cf7c0125\") " Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.441456 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-scripts" (OuterVolumeSpecName: "scripts") pod "bf647836-f37f-448a-ac84-c610cf7c0125" (UID: "bf647836-f37f-448a-ac84-c610cf7c0125"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.443992 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf647836-f37f-448a-ac84-c610cf7c0125-kube-api-access-zrrv6" (OuterVolumeSpecName: "kube-api-access-zrrv6") pod "bf647836-f37f-448a-ac84-c610cf7c0125" (UID: "bf647836-f37f-448a-ac84-c610cf7c0125"). InnerVolumeSpecName "kube-api-access-zrrv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.475303 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf647836-f37f-448a-ac84-c610cf7c0125" (UID: "bf647836-f37f-448a-ac84-c610cf7c0125"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.483131 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-config-data" (OuterVolumeSpecName: "config-data") pod "bf647836-f37f-448a-ac84-c610cf7c0125" (UID: "bf647836-f37f-448a-ac84-c610cf7c0125"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.537760 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.537821 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.537837 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf647836-f37f-448a-ac84-c610cf7c0125-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:30 crc kubenswrapper[4772]: I1128 11:27:30.537848 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrrv6\" (UniqueName: \"kubernetes.io/projected/bf647836-f37f-448a-ac84-c610cf7c0125-kube-api-access-zrrv6\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.015347 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lpnr6" event={"ID":"bf647836-f37f-448a-ac84-c610cf7c0125","Type":"ContainerDied","Data":"07863ad84a54c9e1f1f1d332464d96124cd5fef98baecad0a9bc66979cb0ccce"} Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.015421 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07863ad84a54c9e1f1f1d332464d96124cd5fef98baecad0a9bc66979cb0ccce" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.015420 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lpnr6" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.109873 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 11:27:31 crc kubenswrapper[4772]: E1128 11:27:31.110374 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="923ff71f-a546-41a6-a825-d01d907c7763" containerName="init" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.110394 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="923ff71f-a546-41a6-a825-d01d907c7763" containerName="init" Nov 28 11:27:31 crc kubenswrapper[4772]: E1128 11:27:31.110419 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e87f3486-967b-47b7-87ff-b3c11d66e63c" containerName="nova-manage" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.110425 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e87f3486-967b-47b7-87ff-b3c11d66e63c" containerName="nova-manage" Nov 28 11:27:31 crc kubenswrapper[4772]: E1128 11:27:31.110442 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf647836-f37f-448a-ac84-c610cf7c0125" containerName="nova-cell1-conductor-db-sync" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.110449 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf647836-f37f-448a-ac84-c610cf7c0125" containerName="nova-cell1-conductor-db-sync" Nov 28 11:27:31 crc kubenswrapper[4772]: E1128 11:27:31.110460 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="923ff71f-a546-41a6-a825-d01d907c7763" containerName="dnsmasq-dns" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.110466 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="923ff71f-a546-41a6-a825-d01d907c7763" 
containerName="dnsmasq-dns" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.110686 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e87f3486-967b-47b7-87ff-b3c11d66e63c" containerName="nova-manage" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.110710 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="923ff71f-a546-41a6-a825-d01d907c7763" containerName="dnsmasq-dns" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.110723 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf647836-f37f-448a-ac84-c610cf7c0125" containerName="nova-cell1-conductor-db-sync" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.111429 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.120338 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.141547 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.161129 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfl8s\" (UniqueName: \"kubernetes.io/projected/4e7b45c1-3ed2-4693-b75b-faf73867de92-kube-api-access-zfl8s\") pod \"nova-cell1-conductor-0\" (UID: \"4e7b45c1-3ed2-4693-b75b-faf73867de92\") " pod="openstack/nova-cell1-conductor-0" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.161487 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7b45c1-3ed2-4693-b75b-faf73867de92-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4e7b45c1-3ed2-4693-b75b-faf73867de92\") " pod="openstack/nova-cell1-conductor-0" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.161662 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e7b45c1-3ed2-4693-b75b-faf73867de92-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4e7b45c1-3ed2-4693-b75b-faf73867de92\") " pod="openstack/nova-cell1-conductor-0" Nov 28 11:27:31 crc kubenswrapper[4772]: E1128 11:27:31.258405 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 11:27:31 crc kubenswrapper[4772]: E1128 11:27:31.260769 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 11:27:31 crc kubenswrapper[4772]: E1128 11:27:31.263677 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 11:27:31 crc 
kubenswrapper[4772]: E1128 11:27:31.263788 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="580db61b-24f2-490e-93fa-d07d9a55d6fb" containerName="nova-scheduler-scheduler" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.264541 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfl8s\" (UniqueName: \"kubernetes.io/projected/4e7b45c1-3ed2-4693-b75b-faf73867de92-kube-api-access-zfl8s\") pod \"nova-cell1-conductor-0\" (UID: \"4e7b45c1-3ed2-4693-b75b-faf73867de92\") " pod="openstack/nova-cell1-conductor-0" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.264688 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7b45c1-3ed2-4693-b75b-faf73867de92-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4e7b45c1-3ed2-4693-b75b-faf73867de92\") " pod="openstack/nova-cell1-conductor-0" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.264789 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e7b45c1-3ed2-4693-b75b-faf73867de92-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4e7b45c1-3ed2-4693-b75b-faf73867de92\") " pod="openstack/nova-cell1-conductor-0" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.276453 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7b45c1-3ed2-4693-b75b-faf73867de92-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4e7b45c1-3ed2-4693-b75b-faf73867de92\") " pod="openstack/nova-cell1-conductor-0" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.277195 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e7b45c1-3ed2-4693-b75b-faf73867de92-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4e7b45c1-3ed2-4693-b75b-faf73867de92\") " pod="openstack/nova-cell1-conductor-0" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.288088 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfl8s\" (UniqueName: \"kubernetes.io/projected/4e7b45c1-3ed2-4693-b75b-faf73867de92-kube-api-access-zfl8s\") pod \"nova-cell1-conductor-0\" (UID: \"4e7b45c1-3ed2-4693-b75b-faf73867de92\") " pod="openstack/nova-cell1-conductor-0" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.439333 4772 util.go:30] "No sandbox for pod can be found. 
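[Editor's note] The failing ExecSync calls above are the nova-scheduler readiness probe, /usr/bin/pgrep -r DRST nova-scheduler, executed inside the container; because the container is already stopping, CRI-O refuses to register a new exec PID, so the probe surfaces as "Probe errored" rather than a plain failure. A host-local analogue of an exec-style probe; the real check runs through the CRI inside the container, not via os/exec on the node:

```go
package main

import (
	"fmt"
	"os/exec"
)

// execProbe is a host-local analogue of the exec readiness check in
// the log: run the command, healthy iff it exits 0.
func execProbe(name string, args ...string) error {
	if err := exec.Command(name, args...).Run(); err != nil {
		return fmt.Errorf("probe failed: %w", err)
	}
	return nil
}

func main() {
	// The probe command from the log: succeed if any nova-scheduler
	// process is in state D, R, S or T.
	fmt.Println(execProbe("/usr/bin/pgrep", "-r", "DRST", "nova-scheduler"))
}
```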
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 11:27:31 crc kubenswrapper[4772]: I1128 11:27:31.981041 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 11:27:32 crc kubenswrapper[4772]: I1128 11:27:32.043278 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4e7b45c1-3ed2-4693-b75b-faf73867de92","Type":"ContainerStarted","Data":"e107c6b264b521eb7a8bb4a9094370c81328498e966977ec09172518a7ee855a"} Nov 28 11:27:33 crc kubenswrapper[4772]: I1128 11:27:33.061323 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4e7b45c1-3ed2-4693-b75b-faf73867de92","Type":"ContainerStarted","Data":"5474ce905b821bb3809c63c0453ddd1f1bb834c35dd37549f486f46f0110dc75"} Nov 28 11:27:33 crc kubenswrapper[4772]: I1128 11:27:33.062450 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 28 11:27:33 crc kubenswrapper[4772]: I1128 11:27:33.083061 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.083035531 podStartE2EDuration="2.083035531s" podCreationTimestamp="2025-11-28 11:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:27:33.081580183 +0000 UTC m=+1251.404823440" watchObservedRunningTime="2025-11-28 11:27:33.083035531 +0000 UTC m=+1251.406278768" Nov 28 11:27:33 crc kubenswrapper[4772]: I1128 11:27:33.777147 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 11:27:33 crc kubenswrapper[4772]: I1128 11:27:33.841501 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/580db61b-24f2-490e-93fa-d07d9a55d6fb-config-data\") pod \"580db61b-24f2-490e-93fa-d07d9a55d6fb\" (UID: \"580db61b-24f2-490e-93fa-d07d9a55d6fb\") " Nov 28 11:27:33 crc kubenswrapper[4772]: I1128 11:27:33.841618 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7zdw\" (UniqueName: \"kubernetes.io/projected/580db61b-24f2-490e-93fa-d07d9a55d6fb-kube-api-access-v7zdw\") pod \"580db61b-24f2-490e-93fa-d07d9a55d6fb\" (UID: \"580db61b-24f2-490e-93fa-d07d9a55d6fb\") " Nov 28 11:27:33 crc kubenswrapper[4772]: I1128 11:27:33.841846 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580db61b-24f2-490e-93fa-d07d9a55d6fb-combined-ca-bundle\") pod \"580db61b-24f2-490e-93fa-d07d9a55d6fb\" (UID: \"580db61b-24f2-490e-93fa-d07d9a55d6fb\") " Nov 28 11:27:33 crc kubenswrapper[4772]: I1128 11:27:33.847962 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580db61b-24f2-490e-93fa-d07d9a55d6fb-kube-api-access-v7zdw" (OuterVolumeSpecName: "kube-api-access-v7zdw") pod "580db61b-24f2-490e-93fa-d07d9a55d6fb" (UID: "580db61b-24f2-490e-93fa-d07d9a55d6fb"). InnerVolumeSpecName "kube-api-access-v7zdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:27:33 crc kubenswrapper[4772]: I1128 11:27:33.873541 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/580db61b-24f2-490e-93fa-d07d9a55d6fb-config-data" (OuterVolumeSpecName: "config-data") pod "580db61b-24f2-490e-93fa-d07d9a55d6fb" (UID: "580db61b-24f2-490e-93fa-d07d9a55d6fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:33 crc kubenswrapper[4772]: I1128 11:27:33.924312 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/580db61b-24f2-490e-93fa-d07d9a55d6fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "580db61b-24f2-490e-93fa-d07d9a55d6fb" (UID: "580db61b-24f2-490e-93fa-d07d9a55d6fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:33 crc kubenswrapper[4772]: I1128 11:27:33.945030 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/580db61b-24f2-490e-93fa-d07d9a55d6fb-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:33 crc kubenswrapper[4772]: I1128 11:27:33.945480 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7zdw\" (UniqueName: \"kubernetes.io/projected/580db61b-24f2-490e-93fa-d07d9a55d6fb-kube-api-access-v7zdw\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:33 crc kubenswrapper[4772]: I1128 11:27:33.945575 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580db61b-24f2-490e-93fa-d07d9a55d6fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.079044 4772 generic.go:334] "Generic (PLEG): container finished" podID="3113148b-3177-44d4-b53a-3b80a848b180" containerID="cbc35f8fb6b97108128d753e8ca7dae2583ed5a69f3a03806ae7a85f5227623f" exitCode=0 Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.079122 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3113148b-3177-44d4-b53a-3b80a848b180","Type":"ContainerDied","Data":"cbc35f8fb6b97108128d753e8ca7dae2583ed5a69f3a03806ae7a85f5227623f"} Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.083155 4772 generic.go:334] "Generic (PLEG): container finished" podID="580db61b-24f2-490e-93fa-d07d9a55d6fb" containerID="26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da" exitCode=0 Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.083511 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"580db61b-24f2-490e-93fa-d07d9a55d6fb","Type":"ContainerDied","Data":"26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da"} Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.083548 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.083582 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"580db61b-24f2-490e-93fa-d07d9a55d6fb","Type":"ContainerDied","Data":"62835bf19eb6d7686f0757e54fa9047fab21d9d36f8095206593fef63a584643"} Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.083609 4772 scope.go:117] "RemoveContainer" containerID="26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.158672 4772 scope.go:117] "RemoveContainer" containerID="26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.160901 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:27:34 crc kubenswrapper[4772]: E1128 11:27:34.172498 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da\": container with ID starting with 26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da not found: ID does not exist" containerID="26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.172589 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da"} err="failed to get container status \"26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da\": rpc error: code = NotFound desc = could not find container \"26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da\": container with ID starting with 26ee1d2d439057441c72260feb82fe9e695bebf78917c407183e633ba29c81da not found: ID does not exist" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.183389 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.189847 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.195539 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:27:34 crc kubenswrapper[4772]: E1128 11:27:34.196002 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580db61b-24f2-490e-93fa-d07d9a55d6fb" containerName="nova-scheduler-scheduler" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.196021 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="580db61b-24f2-490e-93fa-d07d9a55d6fb" containerName="nova-scheduler-scheduler" Nov 28 11:27:34 crc kubenswrapper[4772]: E1128 11:27:34.196057 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3113148b-3177-44d4-b53a-3b80a848b180" containerName="nova-api-api" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.196067 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3113148b-3177-44d4-b53a-3b80a848b180" containerName="nova-api-api" Nov 28 11:27:34 crc kubenswrapper[4772]: E1128 11:27:34.196107 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3113148b-3177-44d4-b53a-3b80a848b180" containerName="nova-api-log" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.196116 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3113148b-3177-44d4-b53a-3b80a848b180" containerName="nova-api-log" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.196415 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3113148b-3177-44d4-b53a-3b80a848b180" containerName="nova-api-log" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.196447 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3113148b-3177-44d4-b53a-3b80a848b180" containerName="nova-api-api" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.196457 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="580db61b-24f2-490e-93fa-d07d9a55d6fb" containerName="nova-scheduler-scheduler" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.197386 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.199696 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.204658 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.280575 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndtcb\" (UniqueName: \"kubernetes.io/projected/3113148b-3177-44d4-b53a-3b80a848b180-kube-api-access-ndtcb\") pod \"3113148b-3177-44d4-b53a-3b80a848b180\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.281000 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3113148b-3177-44d4-b53a-3b80a848b180-logs\") pod \"3113148b-3177-44d4-b53a-3b80a848b180\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.281062 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-combined-ca-bundle\") pod \"3113148b-3177-44d4-b53a-3b80a848b180\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.281115 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-config-data\") pod \"3113148b-3177-44d4-b53a-3b80a848b180\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.281871 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3113148b-3177-44d4-b53a-3b80a848b180-logs" (OuterVolumeSpecName: "logs") pod "3113148b-3177-44d4-b53a-3b80a848b180" (UID: "3113148b-3177-44d4-b53a-3b80a848b180"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.286677 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3113148b-3177-44d4-b53a-3b80a848b180-kube-api-access-ndtcb" (OuterVolumeSpecName: "kube-api-access-ndtcb") pod "3113148b-3177-44d4-b53a-3b80a848b180" (UID: "3113148b-3177-44d4-b53a-3b80a848b180"). InnerVolumeSpecName "kube-api-access-ndtcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:27:34 crc kubenswrapper[4772]: E1128 11:27:34.313348 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-config-data podName:3113148b-3177-44d4-b53a-3b80a848b180 nodeName:}" failed. No retries permitted until 2025-11-28 11:27:34.813301248 +0000 UTC m=+1253.136544475 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-config-data") pod "3113148b-3177-44d4-b53a-3b80a848b180" (UID: "3113148b-3177-44d4-b53a-3b80a848b180") : error deleting /var/lib/kubelet/pods/3113148b-3177-44d4-b53a-3b80a848b180/volume-subpaths: remove /var/lib/kubelet/pods/3113148b-3177-44d4-b53a-3b80a848b180/volume-subpaths: no such file or directory Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.316291 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3113148b-3177-44d4-b53a-3b80a848b180" (UID: "3113148b-3177-44d4-b53a-3b80a848b180"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.383226 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-config-data\") pod \"nova-scheduler-0\" (UID: \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.383281 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn2k7\" (UniqueName: \"kubernetes.io/projected/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-kube-api-access-tn2k7\") pod \"nova-scheduler-0\" (UID: \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.383618 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.384322 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndtcb\" (UniqueName: \"kubernetes.io/projected/3113148b-3177-44d4-b53a-3b80a848b180-kube-api-access-ndtcb\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.384430 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3113148b-3177-44d4-b53a-3b80a848b180-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.384523 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.486176 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-config-data\") pod \"nova-scheduler-0\" (UID: \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.486476 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn2k7\" (UniqueName: \"kubernetes.io/projected/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-kube-api-access-tn2k7\") pod \"nova-scheduler-0\" (UID: \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\") " 
pod="openstack/nova-scheduler-0" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.486623 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.493000 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.494345 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-config-data\") pod \"nova-scheduler-0\" (UID: \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.516735 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn2k7\" (UniqueName: \"kubernetes.io/projected/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-kube-api-access-tn2k7\") pod \"nova-scheduler-0\" (UID: \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\") " pod="openstack/nova-scheduler-0" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.519905 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.894992 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-config-data\") pod \"3113148b-3177-44d4-b53a-3b80a848b180\" (UID: \"3113148b-3177-44d4-b53a-3b80a848b180\") " Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.901695 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-config-data" (OuterVolumeSpecName: "config-data") pod "3113148b-3177-44d4-b53a-3b80a848b180" (UID: "3113148b-3177-44d4-b53a-3b80a848b180"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:34 crc kubenswrapper[4772]: I1128 11:27:34.998008 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3113148b-3177-44d4-b53a-3b80a848b180-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:35 crc kubenswrapper[4772]: W1128 11:27:35.064442 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0570449e_b49f_49ef_b8a4_bc7a55a8fe14.slice/crio-63724dc8f4348b6c948c13c7458d6c330cf67c07979828e41ab67f07b462f333 WatchSource:0}: Error finding container 63724dc8f4348b6c948c13c7458d6c330cf67c07979828e41ab67f07b462f333: Status 404 returned error can't find the container with id 63724dc8f4348b6c948c13c7458d6c330cf67c07979828e41ab67f07b462f333 Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.069882 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.116767 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0570449e-b49f-49ef-b8a4-bc7a55a8fe14","Type":"ContainerStarted","Data":"63724dc8f4348b6c948c13c7458d6c330cf67c07979828e41ab67f07b462f333"} Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.126931 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3113148b-3177-44d4-b53a-3b80a848b180","Type":"ContainerDied","Data":"46d09a5f7141ee58904c998f41707f2e06a2e90fe2ab90b2ee9bebaeffbef841"} Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.127415 4772 scope.go:117] "RemoveContainer" containerID="cbc35f8fb6b97108128d753e8ca7dae2583ed5a69f3a03806ae7a85f5227623f" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.127105 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.177672 4772 scope.go:117] "RemoveContainer" containerID="d9b2f8cfd0f662bfc606ac72f944bc6fab6b20ce619a537af5fc029652d28e04" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.181551 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.194050 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.216000 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.218443 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.222230 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.226962 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.311315 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2944172d-3140-4ce7-83df-4b5fa866cd75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " pod="openstack/nova-api-0" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.312085 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2944172d-3140-4ce7-83df-4b5fa866cd75-logs\") pod \"nova-api-0\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " pod="openstack/nova-api-0" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.312165 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv6kb\" (UniqueName: \"kubernetes.io/projected/2944172d-3140-4ce7-83df-4b5fa866cd75-kube-api-access-tv6kb\") pod \"nova-api-0\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " pod="openstack/nova-api-0" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.312378 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2944172d-3140-4ce7-83df-4b5fa866cd75-config-data\") pod \"nova-api-0\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " pod="openstack/nova-api-0" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.416728 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2944172d-3140-4ce7-83df-4b5fa866cd75-logs\") pod \"nova-api-0\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " pod="openstack/nova-api-0" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.416857 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv6kb\" (UniqueName: \"kubernetes.io/projected/2944172d-3140-4ce7-83df-4b5fa866cd75-kube-api-access-tv6kb\") pod \"nova-api-0\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " pod="openstack/nova-api-0" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.416929 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2944172d-3140-4ce7-83df-4b5fa866cd75-config-data\") pod \"nova-api-0\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " pod="openstack/nova-api-0" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.416969 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2944172d-3140-4ce7-83df-4b5fa866cd75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " pod="openstack/nova-api-0" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.417385 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2944172d-3140-4ce7-83df-4b5fa866cd75-logs\") pod \"nova-api-0\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " 
pod="openstack/nova-api-0" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.422074 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2944172d-3140-4ce7-83df-4b5fa866cd75-config-data\") pod \"nova-api-0\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " pod="openstack/nova-api-0" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.423332 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2944172d-3140-4ce7-83df-4b5fa866cd75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " pod="openstack/nova-api-0" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.437265 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv6kb\" (UniqueName: \"kubernetes.io/projected/2944172d-3140-4ce7-83df-4b5fa866cd75-kube-api-access-tv6kb\") pod \"nova-api-0\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " pod="openstack/nova-api-0" Nov 28 11:27:35 crc kubenswrapper[4772]: I1128 11:27:35.538329 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 11:27:36 crc kubenswrapper[4772]: I1128 11:27:36.024248 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3113148b-3177-44d4-b53a-3b80a848b180" path="/var/lib/kubelet/pods/3113148b-3177-44d4-b53a-3b80a848b180/volumes" Nov 28 11:27:36 crc kubenswrapper[4772]: I1128 11:27:36.026854 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580db61b-24f2-490e-93fa-d07d9a55d6fb" path="/var/lib/kubelet/pods/580db61b-24f2-490e-93fa-d07d9a55d6fb/volumes" Nov 28 11:27:36 crc kubenswrapper[4772]: I1128 11:27:36.071678 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:27:36 crc kubenswrapper[4772]: W1128 11:27:36.075732 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2944172d_3140_4ce7_83df_4b5fa866cd75.slice/crio-d83d7d05fb7eee355625cba66685c94420f82b0f1103e156c3dbb7c575723a05 WatchSource:0}: Error finding container d83d7d05fb7eee355625cba66685c94420f82b0f1103e156c3dbb7c575723a05: Status 404 returned error can't find the container with id d83d7d05fb7eee355625cba66685c94420f82b0f1103e156c3dbb7c575723a05 Nov 28 11:27:36 crc kubenswrapper[4772]: I1128 11:27:36.153001 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2944172d-3140-4ce7-83df-4b5fa866cd75","Type":"ContainerStarted","Data":"d83d7d05fb7eee355625cba66685c94420f82b0f1103e156c3dbb7c575723a05"} Nov 28 11:27:36 crc kubenswrapper[4772]: I1128 11:27:36.157760 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0570449e-b49f-49ef-b8a4-bc7a55a8fe14","Type":"ContainerStarted","Data":"ca55e97962b98e80524c3e2265733cec3c142c281df9b5a3f885aa0783449d40"} Nov 28 11:27:37 crc kubenswrapper[4772]: I1128 11:27:37.205685 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2944172d-3140-4ce7-83df-4b5fa866cd75","Type":"ContainerStarted","Data":"d5f3b628a69f614fac08fb09b59ff4f75f20aec9e7dcb0fa6ea4f084dce6b1a0"} Nov 28 11:27:37 crc kubenswrapper[4772]: I1128 11:27:37.206141 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"2944172d-3140-4ce7-83df-4b5fa866cd75","Type":"ContainerStarted","Data":"ec25988538de87c2c3e3f6a4ac4153e0aa2973567df280e4ccc43dc830820f89"} Nov 28 11:27:37 crc kubenswrapper[4772]: I1128 11:27:37.236042 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.2360208950000002 podStartE2EDuration="3.236020895s" podCreationTimestamp="2025-11-28 11:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:27:36.181742265 +0000 UTC m=+1254.504985582" watchObservedRunningTime="2025-11-28 11:27:37.236020895 +0000 UTC m=+1255.559264112" Nov 28 11:27:37 crc kubenswrapper[4772]: I1128 11:27:37.238579 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.238568891 podStartE2EDuration="2.238568891s" podCreationTimestamp="2025-11-28 11:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:27:37.233235023 +0000 UTC m=+1255.556478250" watchObservedRunningTime="2025-11-28 11:27:37.238568891 +0000 UTC m=+1255.561812118" Nov 28 11:27:39 crc kubenswrapper[4772]: I1128 11:27:39.520268 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 11:27:41 crc kubenswrapper[4772]: I1128 11:27:41.482513 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 28 11:27:44 crc kubenswrapper[4772]: I1128 11:27:44.520706 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 11:27:44 crc kubenswrapper[4772]: I1128 11:27:44.551104 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 11:27:44 crc kubenswrapper[4772]: I1128 11:27:44.679510 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 11:27:45 crc kubenswrapper[4772]: I1128 11:27:45.539442 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 11:27:45 crc kubenswrapper[4772]: I1128 11:27:45.539519 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 11:27:46 crc kubenswrapper[4772]: I1128 11:27:46.622647 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2944172d-3140-4ce7-83df-4b5fa866cd75" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 11:27:46 crc kubenswrapper[4772]: I1128 11:27:46.622766 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2944172d-3140-4ce7-83df-4b5fa866cd75" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 11:27:48 crc kubenswrapper[4772]: I1128 11:27:48.414653 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 11:27:52 crc kubenswrapper[4772]: E1128 11:27:52.205664 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa2cbac3_6468_4bf0_9929_29f9157fff5e.slice/crio-6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa2cbac3_6468_4bf0_9929_29f9157fff5e.slice/crio-conmon-6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f.scope\": RecentStats: unable to find data in memory cache]" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.245426 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.313443 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34e3a875-5116-44e0-9bd1-f5654a261a69-logs\") pod \"34e3a875-5116-44e0-9bd1-f5654a261a69\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.317164 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34e3a875-5116-44e0-9bd1-f5654a261a69-logs" (OuterVolumeSpecName: "logs") pod "34e3a875-5116-44e0-9bd1-f5654a261a69" (UID: "34e3a875-5116-44e0-9bd1-f5654a261a69"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.317308 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34e3a875-5116-44e0-9bd1-f5654a261a69-config-data\") pod \"34e3a875-5116-44e0-9bd1-f5654a261a69\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.317437 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz628\" (UniqueName: \"kubernetes.io/projected/34e3a875-5116-44e0-9bd1-f5654a261a69-kube-api-access-vz628\") pod \"34e3a875-5116-44e0-9bd1-f5654a261a69\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.317800 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e3a875-5116-44e0-9bd1-f5654a261a69-combined-ca-bundle\") pod \"34e3a875-5116-44e0-9bd1-f5654a261a69\" (UID: \"34e3a875-5116-44e0-9bd1-f5654a261a69\") " Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.318318 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34e3a875-5116-44e0-9bd1-f5654a261a69-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.342664 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e3a875-5116-44e0-9bd1-f5654a261a69-kube-api-access-vz628" (OuterVolumeSpecName: "kube-api-access-vz628") pod "34e3a875-5116-44e0-9bd1-f5654a261a69" (UID: "34e3a875-5116-44e0-9bd1-f5654a261a69"). InnerVolumeSpecName "kube-api-access-vz628". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.372699 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e3a875-5116-44e0-9bd1-f5654a261a69-config-data" (OuterVolumeSpecName: "config-data") pod "34e3a875-5116-44e0-9bd1-f5654a261a69" (UID: "34e3a875-5116-44e0-9bd1-f5654a261a69"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.373391 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.414465 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e3a875-5116-44e0-9bd1-f5654a261a69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34e3a875-5116-44e0-9bd1-f5654a261a69" (UID: "34e3a875-5116-44e0-9bd1-f5654a261a69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.421300 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkcfb\" (UniqueName: \"kubernetes.io/projected/fa2cbac3-6468-4bf0-9929-29f9157fff5e-kube-api-access-fkcfb\") pod \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\" (UID: \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\") " Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.421577 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2cbac3-6468-4bf0-9929-29f9157fff5e-combined-ca-bundle\") pod \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\" (UID: \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\") " Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.421653 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2cbac3-6468-4bf0-9929-29f9157fff5e-config-data\") pod \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\" (UID: \"fa2cbac3-6468-4bf0-9929-29f9157fff5e\") " Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.422162 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34e3a875-5116-44e0-9bd1-f5654a261a69-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.422188 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz628\" (UniqueName: \"kubernetes.io/projected/34e3a875-5116-44e0-9bd1-f5654a261a69-kube-api-access-vz628\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.422203 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e3a875-5116-44e0-9bd1-f5654a261a69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.439667 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa2cbac3-6468-4bf0-9929-29f9157fff5e-kube-api-access-fkcfb" (OuterVolumeSpecName: "kube-api-access-fkcfb") pod "fa2cbac3-6468-4bf0-9929-29f9157fff5e" (UID: "fa2cbac3-6468-4bf0-9929-29f9157fff5e"). InnerVolumeSpecName "kube-api-access-fkcfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.487569 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2cbac3-6468-4bf0-9929-29f9157fff5e-config-data" (OuterVolumeSpecName: "config-data") pod "fa2cbac3-6468-4bf0-9929-29f9157fff5e" (UID: "fa2cbac3-6468-4bf0-9929-29f9157fff5e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.487705 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2cbac3-6468-4bf0-9929-29f9157fff5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa2cbac3-6468-4bf0-9929-29f9157fff5e" (UID: "fa2cbac3-6468-4bf0-9929-29f9157fff5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.525249 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2cbac3-6468-4bf0-9929-29f9157fff5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.525285 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2cbac3-6468-4bf0-9929-29f9157fff5e-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.525294 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkcfb\" (UniqueName: \"kubernetes.io/projected/fa2cbac3-6468-4bf0-9929-29f9157fff5e-kube-api-access-fkcfb\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.766200 4772 generic.go:334] "Generic (PLEG): container finished" podID="fa2cbac3-6468-4bf0-9929-29f9157fff5e" containerID="6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f" exitCode=137 Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.766281 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa2cbac3-6468-4bf0-9929-29f9157fff5e","Type":"ContainerDied","Data":"6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f"} Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.766319 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa2cbac3-6468-4bf0-9929-29f9157fff5e","Type":"ContainerDied","Data":"33bf8ace0fd45a8cca62aefdc525b8c5a390313f7374ecba0fac0daaa51cd097"} Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.766343 4772 scope.go:117] "RemoveContainer" containerID="6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.766403 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.769470 4772 generic.go:334] "Generic (PLEG): container finished" podID="34e3a875-5116-44e0-9bd1-f5654a261a69" containerID="4d6d3ad85b9ba41543a419add92a6781ac8a4be3bb0d51c69f780c604eea4fda" exitCode=137 Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.769530 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34e3a875-5116-44e0-9bd1-f5654a261a69","Type":"ContainerDied","Data":"4d6d3ad85b9ba41543a419add92a6781ac8a4be3bb0d51c69f780c604eea4fda"} Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.769569 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34e3a875-5116-44e0-9bd1-f5654a261a69","Type":"ContainerDied","Data":"4bd4d2ca325f8353c999c937ca60b0fd387baa7eed89ebb37f49891e4a2a0016"} Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.769647 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.814088 4772 scope.go:117] "RemoveContainer" containerID="6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f" Nov 28 11:27:52 crc kubenswrapper[4772]: E1128 11:27:52.819024 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f\": container with ID starting with 6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f not found: ID does not exist" containerID="6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.819101 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f"} err="failed to get container status \"6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f\": rpc error: code = NotFound desc = could not find container \"6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f\": container with ID starting with 6565c549db96303774c6f923523d31847279a5fda9b0b4c52429b366e42ed63f not found: ID does not exist" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.819139 4772 scope.go:117] "RemoveContainer" containerID="4d6d3ad85b9ba41543a419add92a6781ac8a4be3bb0d51c69f780c604eea4fda" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.853400 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.873525 4772 scope.go:117] "RemoveContainer" containerID="068afbab709df1f68bf171423eee85bce55241156243be381c9462329ab1ecae" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.893503 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.929281 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.935792 4772 scope.go:117] "RemoveContainer" containerID="4d6d3ad85b9ba41543a419add92a6781ac8a4be3bb0d51c69f780c604eea4fda" Nov 28 11:27:52 crc kubenswrapper[4772]: E1128 11:27:52.936586 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d6d3ad85b9ba41543a419add92a6781ac8a4be3bb0d51c69f780c604eea4fda\": container with ID starting with 4d6d3ad85b9ba41543a419add92a6781ac8a4be3bb0d51c69f780c604eea4fda not found: ID does not exist" containerID="4d6d3ad85b9ba41543a419add92a6781ac8a4be3bb0d51c69f780c604eea4fda" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.936663 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6d3ad85b9ba41543a419add92a6781ac8a4be3bb0d51c69f780c604eea4fda"} err="failed to get container status \"4d6d3ad85b9ba41543a419add92a6781ac8a4be3bb0d51c69f780c604eea4fda\": rpc error: code = NotFound desc = could not find container \"4d6d3ad85b9ba41543a419add92a6781ac8a4be3bb0d51c69f780c604eea4fda\": container with ID starting with 4d6d3ad85b9ba41543a419add92a6781ac8a4be3bb0d51c69f780c604eea4fda not found: ID does not exist" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.936707 4772 scope.go:117] "RemoveContainer" containerID="068afbab709df1f68bf171423eee85bce55241156243be381c9462329ab1ecae" Nov 28 11:27:52 
crc kubenswrapper[4772]: E1128 11:27:52.937139 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068afbab709df1f68bf171423eee85bce55241156243be381c9462329ab1ecae\": container with ID starting with 068afbab709df1f68bf171423eee85bce55241156243be381c9462329ab1ecae not found: ID does not exist" containerID="068afbab709df1f68bf171423eee85bce55241156243be381c9462329ab1ecae" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.937262 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068afbab709df1f68bf171423eee85bce55241156243be381c9462329ab1ecae"} err="failed to get container status \"068afbab709df1f68bf171423eee85bce55241156243be381c9462329ab1ecae\": rpc error: code = NotFound desc = could not find container \"068afbab709df1f68bf171423eee85bce55241156243be381c9462329ab1ecae\": container with ID starting with 068afbab709df1f68bf171423eee85bce55241156243be381c9462329ab1ecae not found: ID does not exist" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.945479 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.955625 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 11:27:52 crc kubenswrapper[4772]: E1128 11:27:52.956306 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2cbac3-6468-4bf0-9929-29f9157fff5e" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.956339 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2cbac3-6468-4bf0-9929-29f9157fff5e" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 11:27:52 crc kubenswrapper[4772]: E1128 11:27:52.956381 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e3a875-5116-44e0-9bd1-f5654a261a69" containerName="nova-metadata-metadata" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.956391 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e3a875-5116-44e0-9bd1-f5654a261a69" containerName="nova-metadata-metadata" Nov 28 11:27:52 crc kubenswrapper[4772]: E1128 11:27:52.956445 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e3a875-5116-44e0-9bd1-f5654a261a69" containerName="nova-metadata-log" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.956455 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e3a875-5116-44e0-9bd1-f5654a261a69" containerName="nova-metadata-log" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.956705 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa2cbac3-6468-4bf0-9929-29f9157fff5e" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.956734 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e3a875-5116-44e0-9bd1-f5654a261a69" containerName="nova-metadata-log" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.956748 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e3a875-5116-44e0-9bd1-f5654a261a69" containerName="nova-metadata-metadata" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.957696 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.959836 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.959978 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.960213 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.975826 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.988858 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.991819 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.994780 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 11:27:52 crc kubenswrapper[4772]: I1128 11:27:52.995293 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.005819 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.043165 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.043257 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.043294 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.043342 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.043396 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a60d2c-4d5d-4fc0-895c-3368a0097268-logs\") pod \"nova-metadata-0\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc 
kubenswrapper[4772]: I1128 11:27:53.043431 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-config-data\") pod \"nova-metadata-0\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.043451 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpdjs\" (UniqueName: \"kubernetes.io/projected/08a60d2c-4d5d-4fc0-895c-3368a0097268-kube-api-access-cpdjs\") pod \"nova-metadata-0\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.043496 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.043580 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwwbb\" (UniqueName: \"kubernetes.io/projected/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-kube-api-access-dwwbb\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.043622 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.145192 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.145256 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.145298 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.145323 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a60d2c-4d5d-4fc0-895c-3368a0097268-logs\") pod \"nova-metadata-0\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.145352 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-config-data\") pod \"nova-metadata-0\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.145393 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpdjs\" (UniqueName: \"kubernetes.io/projected/08a60d2c-4d5d-4fc0-895c-3368a0097268-kube-api-access-cpdjs\") pod \"nova-metadata-0\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.145424 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.145471 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwwbb\" (UniqueName: \"kubernetes.io/projected/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-kube-api-access-dwwbb\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.145497 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.145545 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.149151 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a60d2c-4d5d-4fc0-895c-3368a0097268-logs\") pod \"nova-metadata-0\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.153013 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-config-data\") pod \"nova-metadata-0\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.153489 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.153812 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.154405 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.154557 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.155748 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.161161 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.163829 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpdjs\" (UniqueName: \"kubernetes.io/projected/08a60d2c-4d5d-4fc0-895c-3368a0097268-kube-api-access-cpdjs\") pod \"nova-metadata-0\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.167637 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwwbb\" (UniqueName: \"kubernetes.io/projected/9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a-kube-api-access-dwwbb\") pod \"nova-cell1-novncproxy-0\" (UID: \"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.281334 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.316172 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.826054 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 11:27:53 crc kubenswrapper[4772]: W1128 11:27:53.826574 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb843e3_1f1d_4d9d_8e2b_aa7d3cfc170a.slice/crio-d33f94e88559d57ead2262bb2e68878d5a3ec3efb9932dd30757c6d6ab2d4c0f WatchSource:0}: Error finding container d33f94e88559d57ead2262bb2e68878d5a3ec3efb9932dd30757c6d6ab2d4c0f: Status 404 returned error can't find the container with id d33f94e88559d57ead2262bb2e68878d5a3ec3efb9932dd30757c6d6ab2d4c0f Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.896835 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.896926 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.896993 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.907174 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da141ddbcec3414566d62a4d52bc506fe611de913cb5db4b82768e912f8c16c0"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.907962 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://da141ddbcec3414566d62a4d52bc506fe611de913cb5db4b82768e912f8c16c0" gracePeriod=600 Nov 28 11:27:53 crc kubenswrapper[4772]: I1128 11:27:53.924332 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 11:27:54 crc kubenswrapper[4772]: I1128 11:27:54.008929 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e3a875-5116-44e0-9bd1-f5654a261a69" path="/var/lib/kubelet/pods/34e3a875-5116-44e0-9bd1-f5654a261a69/volumes" Nov 28 11:27:54 crc kubenswrapper[4772]: I1128 11:27:54.010280 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa2cbac3-6468-4bf0-9929-29f9157fff5e" path="/var/lib/kubelet/pods/fa2cbac3-6468-4bf0-9929-29f9157fff5e/volumes" Nov 28 11:27:54 crc kubenswrapper[4772]: I1128 11:27:54.798386 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="da141ddbcec3414566d62a4d52bc506fe611de913cb5db4b82768e912f8c16c0" exitCode=0 Nov 28 11:27:54 crc kubenswrapper[4772]: I1128 11:27:54.798552 4772 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"da141ddbcec3414566d62a4d52bc506fe611de913cb5db4b82768e912f8c16c0"} Nov 28 11:27:54 crc kubenswrapper[4772]: I1128 11:27:54.799075 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"4940b44c04e1b9bb359aa7bb4ce020bd708399be9f1195fe75c4b24f11ffd061"} Nov 28 11:27:54 crc kubenswrapper[4772]: I1128 11:27:54.799094 4772 scope.go:117] "RemoveContainer" containerID="146105a3e2de3e49e98dafee8802eaebe7226a811726066f96e02933b7de92a2" Nov 28 11:27:54 crc kubenswrapper[4772]: I1128 11:27:54.803777 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08a60d2c-4d5d-4fc0-895c-3368a0097268","Type":"ContainerStarted","Data":"c5bab7c28e48cd08a35741dbe8d0ceb54ae312591a2e49fd502f62fea54973af"} Nov 28 11:27:54 crc kubenswrapper[4772]: I1128 11:27:54.803932 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08a60d2c-4d5d-4fc0-895c-3368a0097268","Type":"ContainerStarted","Data":"9e0580120a897d874a82efcf017ee9514326ff0cb12510bd3312fe567fdff4fe"} Nov 28 11:27:54 crc kubenswrapper[4772]: I1128 11:27:54.804029 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08a60d2c-4d5d-4fc0-895c-3368a0097268","Type":"ContainerStarted","Data":"4a900b2ce398c0eb1ac24c6e8b4bb2f806d9e509977b7ddb24c50eed3a7a5ab3"} Nov 28 11:27:54 crc kubenswrapper[4772]: I1128 11:27:54.806447 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a","Type":"ContainerStarted","Data":"8a5c94da500e122e1a0930a41f837af1037b61c8c5b53ab7cd43c3b8962a1769"} Nov 28 11:27:54 crc kubenswrapper[4772]: I1128 11:27:54.806475 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a","Type":"ContainerStarted","Data":"d33f94e88559d57ead2262bb2e68878d5a3ec3efb9932dd30757c6d6ab2d4c0f"} Nov 28 11:27:54 crc kubenswrapper[4772]: I1128 11:27:54.895285 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.895252611 podStartE2EDuration="2.895252611s" podCreationTimestamp="2025-11-28 11:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:27:54.858813418 +0000 UTC m=+1273.182056655" watchObservedRunningTime="2025-11-28 11:27:54.895252611 +0000 UTC m=+1273.218495838" Nov 28 11:27:54 crc kubenswrapper[4772]: I1128 11:27:54.903500 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.903489775 podStartE2EDuration="2.903489775s" podCreationTimestamp="2025-11-28 11:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:27:54.884200385 +0000 UTC m=+1273.207443612" watchObservedRunningTime="2025-11-28 11:27:54.903489775 +0000 UTC m=+1273.226733002" Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.543834 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" 
Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.544295 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.544763 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.545097 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.548198 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.548269 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.840574 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-sqvg4"] Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.842906 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.878963 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-sqvg4"] Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.942452 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.942545 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-config\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.942567 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.942639 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.942676 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:55 crc kubenswrapper[4772]: I1128 11:27:55.942767 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqttr\" (UniqueName: 
\"kubernetes.io/projected/16d9e80f-25f4-4cc3-b8f7-442760eff45c-kube-api-access-cqttr\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.046559 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.046615 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.046700 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqttr\" (UniqueName: \"kubernetes.io/projected/16d9e80f-25f4-4cc3-b8f7-442760eff45c-kube-api-access-cqttr\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.046777 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.046803 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-config\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.046840 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.048469 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.052423 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-config\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.052770 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: 
\"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.053082 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.053331 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.075868 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqttr\" (UniqueName: \"kubernetes.io/projected/16d9e80f-25f4-4cc3-b8f7-442760eff45c-kube-api-access-cqttr\") pod \"dnsmasq-dns-cd5cbd7b9-sqvg4\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.179590 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.525016 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-sqvg4"] Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.874335 4772 generic.go:334] "Generic (PLEG): container finished" podID="16d9e80f-25f4-4cc3-b8f7-442760eff45c" containerID="14b44dbe71b78f8f96e38a2d77be5e7be6d70989b2bedea7a90be1fb42e361f8" exitCode=0 Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.874553 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" event={"ID":"16d9e80f-25f4-4cc3-b8f7-442760eff45c","Type":"ContainerDied","Data":"14b44dbe71b78f8f96e38a2d77be5e7be6d70989b2bedea7a90be1fb42e361f8"} Nov 28 11:27:56 crc kubenswrapper[4772]: I1128 11:27:56.874893 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" event={"ID":"16d9e80f-25f4-4cc3-b8f7-442760eff45c","Type":"ContainerStarted","Data":"50db7dc54298c84c7c127a4c0649fcfe83c030e3fd15d5c4f6935544d36a830f"} Nov 28 11:27:57 crc kubenswrapper[4772]: I1128 11:27:57.756610 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:27:57 crc kubenswrapper[4772]: I1128 11:27:57.757153 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="ceilometer-central-agent" containerID="cri-o://fe7fd7df5705c5a6283e2cefdcf631241a9a16ffd2d09ed0e42866eb296db7a1" gracePeriod=30 Nov 28 11:27:57 crc kubenswrapper[4772]: I1128 11:27:57.757260 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="sg-core" containerID="cri-o://075ed32bb6b21e0acdbfe74c9f067351e6dd615261fd0085f319aaf621a6255f" gracePeriod=30 Nov 28 11:27:57 crc kubenswrapper[4772]: I1128 11:27:57.757269 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" 
containerName="ceilometer-notification-agent" containerID="cri-o://e6f5cf45f4f34ea85fedda0107ce488e9202727324a8f860867bf686354b2941" gracePeriod=30 Nov 28 11:27:57 crc kubenswrapper[4772]: I1128 11:27:57.757548 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="proxy-httpd" containerID="cri-o://318b495740122cc74bd0c7323140f8bef3f7db6cb1bfbcfdf5970c3b6422f1d2" gracePeriod=30 Nov 28 11:27:57 crc kubenswrapper[4772]: I1128 11:27:57.890842 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" event={"ID":"16d9e80f-25f4-4cc3-b8f7-442760eff45c","Type":"ContainerStarted","Data":"fb5d9a0b7691beb8cfdb4d3b578fa211b68d76395094beff985b11d9b4b6a81c"} Nov 28 11:27:57 crc kubenswrapper[4772]: I1128 11:27:57.891373 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:27:57 crc kubenswrapper[4772]: I1128 11:27:57.917932 4772 generic.go:334] "Generic (PLEG): container finished" podID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerID="075ed32bb6b21e0acdbfe74c9f067351e6dd615261fd0085f319aaf621a6255f" exitCode=2 Nov 28 11:27:57 crc kubenswrapper[4772]: I1128 11:27:57.917993 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21a2205c-7bba-4f48-b2f7-03196fa277ac","Type":"ContainerDied","Data":"075ed32bb6b21e0acdbfe74c9f067351e6dd615261fd0085f319aaf621a6255f"} Nov 28 11:27:57 crc kubenswrapper[4772]: I1128 11:27:57.961594 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" podStartSLOduration=2.961561315 podStartE2EDuration="2.961561315s" podCreationTimestamp="2025-11-28 11:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:27:57.942975094 +0000 UTC m=+1276.266218311" watchObservedRunningTime="2025-11-28 11:27:57.961561315 +0000 UTC m=+1276.284804542" Nov 28 11:27:58 crc kubenswrapper[4772]: I1128 11:27:58.281840 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:27:58 crc kubenswrapper[4772]: I1128 11:27:58.317284 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 11:27:58 crc kubenswrapper[4772]: I1128 11:27:58.317387 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 11:27:58 crc kubenswrapper[4772]: I1128 11:27:58.562420 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:27:58 crc kubenswrapper[4772]: I1128 11:27:58.562750 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2944172d-3140-4ce7-83df-4b5fa866cd75" containerName="nova-api-log" containerID="cri-o://ec25988538de87c2c3e3f6a4ac4153e0aa2973567df280e4ccc43dc830820f89" gracePeriod=30 Nov 28 11:27:58 crc kubenswrapper[4772]: I1128 11:27:58.562848 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2944172d-3140-4ce7-83df-4b5fa866cd75" containerName="nova-api-api" containerID="cri-o://d5f3b628a69f614fac08fb09b59ff4f75f20aec9e7dcb0fa6ea4f084dce6b1a0" gracePeriod=30 Nov 28 11:27:58 crc kubenswrapper[4772]: I1128 11:27:58.933415 4772 generic.go:334] "Generic (PLEG): container finished" 
podID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerID="318b495740122cc74bd0c7323140f8bef3f7db6cb1bfbcfdf5970c3b6422f1d2" exitCode=0 Nov 28 11:27:58 crc kubenswrapper[4772]: I1128 11:27:58.933815 4772 generic.go:334] "Generic (PLEG): container finished" podID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerID="fe7fd7df5705c5a6283e2cefdcf631241a9a16ffd2d09ed0e42866eb296db7a1" exitCode=0 Nov 28 11:27:58 crc kubenswrapper[4772]: I1128 11:27:58.933865 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21a2205c-7bba-4f48-b2f7-03196fa277ac","Type":"ContainerDied","Data":"318b495740122cc74bd0c7323140f8bef3f7db6cb1bfbcfdf5970c3b6422f1d2"} Nov 28 11:27:58 crc kubenswrapper[4772]: I1128 11:27:58.933901 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21a2205c-7bba-4f48-b2f7-03196fa277ac","Type":"ContainerDied","Data":"fe7fd7df5705c5a6283e2cefdcf631241a9a16ffd2d09ed0e42866eb296db7a1"} Nov 28 11:27:58 crc kubenswrapper[4772]: I1128 11:27:58.938483 4772 generic.go:334] "Generic (PLEG): container finished" podID="2944172d-3140-4ce7-83df-4b5fa866cd75" containerID="ec25988538de87c2c3e3f6a4ac4153e0aa2973567df280e4ccc43dc830820f89" exitCode=143 Nov 28 11:27:58 crc kubenswrapper[4772]: I1128 11:27:58.939800 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2944172d-3140-4ce7-83df-4b5fa866cd75","Type":"ContainerDied","Data":"ec25988538de87c2c3e3f6a4ac4153e0aa2973567df280e4ccc43dc830820f89"} Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.730495 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.823170 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21a2205c-7bba-4f48-b2f7-03196fa277ac-run-httpd\") pod \"21a2205c-7bba-4f48-b2f7-03196fa277ac\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.823273 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-combined-ca-bundle\") pod \"21a2205c-7bba-4f48-b2f7-03196fa277ac\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.823296 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-ceilometer-tls-certs\") pod \"21a2205c-7bba-4f48-b2f7-03196fa277ac\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.823475 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-config-data\") pod \"21a2205c-7bba-4f48-b2f7-03196fa277ac\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.823496 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-scripts\") pod \"21a2205c-7bba-4f48-b2f7-03196fa277ac\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.823618 4772 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21a2205c-7bba-4f48-b2f7-03196fa277ac-log-httpd\") pod \"21a2205c-7bba-4f48-b2f7-03196fa277ac\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.824584 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21a2205c-7bba-4f48-b2f7-03196fa277ac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "21a2205c-7bba-4f48-b2f7-03196fa277ac" (UID: "21a2205c-7bba-4f48-b2f7-03196fa277ac"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.824846 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21a2205c-7bba-4f48-b2f7-03196fa277ac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "21a2205c-7bba-4f48-b2f7-03196fa277ac" (UID: "21a2205c-7bba-4f48-b2f7-03196fa277ac"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.834507 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-scripts" (OuterVolumeSpecName: "scripts") pod "21a2205c-7bba-4f48-b2f7-03196fa277ac" (UID: "21a2205c-7bba-4f48-b2f7-03196fa277ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.901464 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "21a2205c-7bba-4f48-b2f7-03196fa277ac" (UID: "21a2205c-7bba-4f48-b2f7-03196fa277ac"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.925995 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfpjs\" (UniqueName: \"kubernetes.io/projected/21a2205c-7bba-4f48-b2f7-03196fa277ac-kube-api-access-hfpjs\") pod \"21a2205c-7bba-4f48-b2f7-03196fa277ac\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.926117 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-sg-core-conf-yaml\") pod \"21a2205c-7bba-4f48-b2f7-03196fa277ac\" (UID: \"21a2205c-7bba-4f48-b2f7-03196fa277ac\") " Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.926650 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21a2205c-7bba-4f48-b2f7-03196fa277ac-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.926677 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21a2205c-7bba-4f48-b2f7-03196fa277ac-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.926691 4772 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.926703 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.930586 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21a2205c-7bba-4f48-b2f7-03196fa277ac-kube-api-access-hfpjs" (OuterVolumeSpecName: "kube-api-access-hfpjs") pod "21a2205c-7bba-4f48-b2f7-03196fa277ac" (UID: "21a2205c-7bba-4f48-b2f7-03196fa277ac"). InnerVolumeSpecName "kube-api-access-hfpjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.965288 4772 generic.go:334] "Generic (PLEG): container finished" podID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerID="e6f5cf45f4f34ea85fedda0107ce488e9202727324a8f860867bf686354b2941" exitCode=0 Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.965344 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21a2205c-7bba-4f48-b2f7-03196fa277ac","Type":"ContainerDied","Data":"e6f5cf45f4f34ea85fedda0107ce488e9202727324a8f860867bf686354b2941"} Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.965395 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21a2205c-7bba-4f48-b2f7-03196fa277ac","Type":"ContainerDied","Data":"caa8acf20ebfa79e910043e9b0aa172e39e8fcccd62a1045ea76c31b00807aa0"} Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.965415 4772 scope.go:117] "RemoveContainer" containerID="318b495740122cc74bd0c7323140f8bef3f7db6cb1bfbcfdf5970c3b6422f1d2" Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.965644 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.967470 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21a2205c-7bba-4f48-b2f7-03196fa277ac" (UID: "21a2205c-7bba-4f48-b2f7-03196fa277ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.988174 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "21a2205c-7bba-4f48-b2f7-03196fa277ac" (UID: "21a2205c-7bba-4f48-b2f7-03196fa277ac"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:27:59 crc kubenswrapper[4772]: I1128 11:27:59.998248 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-config-data" (OuterVolumeSpecName: "config-data") pod "21a2205c-7bba-4f48-b2f7-03196fa277ac" (UID: "21a2205c-7bba-4f48-b2f7-03196fa277ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.000309 4772 scope.go:117] "RemoveContainer" containerID="075ed32bb6b21e0acdbfe74c9f067351e6dd615261fd0085f319aaf621a6255f" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.027206 4772 scope.go:117] "RemoveContainer" containerID="e6f5cf45f4f34ea85fedda0107ce488e9202727324a8f860867bf686354b2941" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.028661 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.028697 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfpjs\" (UniqueName: \"kubernetes.io/projected/21a2205c-7bba-4f48-b2f7-03196fa277ac-kube-api-access-hfpjs\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.028708 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.028717 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a2205c-7bba-4f48-b2f7-03196fa277ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.051925 4772 scope.go:117] "RemoveContainer" containerID="fe7fd7df5705c5a6283e2cefdcf631241a9a16ffd2d09ed0e42866eb296db7a1" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.081772 4772 scope.go:117] "RemoveContainer" containerID="318b495740122cc74bd0c7323140f8bef3f7db6cb1bfbcfdf5970c3b6422f1d2" Nov 28 11:28:00 crc kubenswrapper[4772]: E1128 11:28:00.082276 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"318b495740122cc74bd0c7323140f8bef3f7db6cb1bfbcfdf5970c3b6422f1d2\": container with ID starting with 318b495740122cc74bd0c7323140f8bef3f7db6cb1bfbcfdf5970c3b6422f1d2 not found: ID does not 
exist" containerID="318b495740122cc74bd0c7323140f8bef3f7db6cb1bfbcfdf5970c3b6422f1d2" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.082307 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318b495740122cc74bd0c7323140f8bef3f7db6cb1bfbcfdf5970c3b6422f1d2"} err="failed to get container status \"318b495740122cc74bd0c7323140f8bef3f7db6cb1bfbcfdf5970c3b6422f1d2\": rpc error: code = NotFound desc = could not find container \"318b495740122cc74bd0c7323140f8bef3f7db6cb1bfbcfdf5970c3b6422f1d2\": container with ID starting with 318b495740122cc74bd0c7323140f8bef3f7db6cb1bfbcfdf5970c3b6422f1d2 not found: ID does not exist" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.082335 4772 scope.go:117] "RemoveContainer" containerID="075ed32bb6b21e0acdbfe74c9f067351e6dd615261fd0085f319aaf621a6255f" Nov 28 11:28:00 crc kubenswrapper[4772]: E1128 11:28:00.082809 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"075ed32bb6b21e0acdbfe74c9f067351e6dd615261fd0085f319aaf621a6255f\": container with ID starting with 075ed32bb6b21e0acdbfe74c9f067351e6dd615261fd0085f319aaf621a6255f not found: ID does not exist" containerID="075ed32bb6b21e0acdbfe74c9f067351e6dd615261fd0085f319aaf621a6255f" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.082831 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075ed32bb6b21e0acdbfe74c9f067351e6dd615261fd0085f319aaf621a6255f"} err="failed to get container status \"075ed32bb6b21e0acdbfe74c9f067351e6dd615261fd0085f319aaf621a6255f\": rpc error: code = NotFound desc = could not find container \"075ed32bb6b21e0acdbfe74c9f067351e6dd615261fd0085f319aaf621a6255f\": container with ID starting with 075ed32bb6b21e0acdbfe74c9f067351e6dd615261fd0085f319aaf621a6255f not found: ID does not exist" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.082844 4772 scope.go:117] "RemoveContainer" containerID="e6f5cf45f4f34ea85fedda0107ce488e9202727324a8f860867bf686354b2941" Nov 28 11:28:00 crc kubenswrapper[4772]: E1128 11:28:00.083212 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6f5cf45f4f34ea85fedda0107ce488e9202727324a8f860867bf686354b2941\": container with ID starting with e6f5cf45f4f34ea85fedda0107ce488e9202727324a8f860867bf686354b2941 not found: ID does not exist" containerID="e6f5cf45f4f34ea85fedda0107ce488e9202727324a8f860867bf686354b2941" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.083230 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f5cf45f4f34ea85fedda0107ce488e9202727324a8f860867bf686354b2941"} err="failed to get container status \"e6f5cf45f4f34ea85fedda0107ce488e9202727324a8f860867bf686354b2941\": rpc error: code = NotFound desc = could not find container \"e6f5cf45f4f34ea85fedda0107ce488e9202727324a8f860867bf686354b2941\": container with ID starting with e6f5cf45f4f34ea85fedda0107ce488e9202727324a8f860867bf686354b2941 not found: ID does not exist" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.083244 4772 scope.go:117] "RemoveContainer" containerID="fe7fd7df5705c5a6283e2cefdcf631241a9a16ffd2d09ed0e42866eb296db7a1" Nov 28 11:28:00 crc kubenswrapper[4772]: E1128 11:28:00.083656 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fe7fd7df5705c5a6283e2cefdcf631241a9a16ffd2d09ed0e42866eb296db7a1\": container with ID starting with fe7fd7df5705c5a6283e2cefdcf631241a9a16ffd2d09ed0e42866eb296db7a1 not found: ID does not exist" containerID="fe7fd7df5705c5a6283e2cefdcf631241a9a16ffd2d09ed0e42866eb296db7a1" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.083675 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7fd7df5705c5a6283e2cefdcf631241a9a16ffd2d09ed0e42866eb296db7a1"} err="failed to get container status \"fe7fd7df5705c5a6283e2cefdcf631241a9a16ffd2d09ed0e42866eb296db7a1\": rpc error: code = NotFound desc = could not find container \"fe7fd7df5705c5a6283e2cefdcf631241a9a16ffd2d09ed0e42866eb296db7a1\": container with ID starting with fe7fd7df5705c5a6283e2cefdcf631241a9a16ffd2d09ed0e42866eb296db7a1 not found: ID does not exist" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.297790 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.307996 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.343209 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:28:00 crc kubenswrapper[4772]: E1128 11:28:00.343748 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="ceilometer-notification-agent" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.343780 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="ceilometer-notification-agent" Nov 28 11:28:00 crc kubenswrapper[4772]: E1128 11:28:00.343815 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="sg-core" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.343825 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="sg-core" Nov 28 11:28:00 crc kubenswrapper[4772]: E1128 11:28:00.343835 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="ceilometer-central-agent" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.343843 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="ceilometer-central-agent" Nov 28 11:28:00 crc kubenswrapper[4772]: E1128 11:28:00.343882 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="proxy-httpd" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.343889 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="proxy-httpd" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.344160 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="proxy-httpd" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.344194 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="ceilometer-central-agent" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.344214 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="ceilometer-notification-agent" Nov 28 
11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.344227 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" containerName="sg-core" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.346050 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.349768 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.350422 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.355954 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.401676 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.438944 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.439147 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.439456 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d55dc2a-8d2f-4f27-82ef-11744255c40c-log-httpd\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.439560 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d55dc2a-8d2f-4f27-82ef-11744255c40c-run-httpd\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.439586 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.439786 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc5q8\" (UniqueName: \"kubernetes.io/projected/3d55dc2a-8d2f-4f27-82ef-11744255c40c-kube-api-access-sc5q8\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.439973 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-scripts\") pod \"ceilometer-0\" 
(UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.440027 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-config-data\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.542335 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc5q8\" (UniqueName: \"kubernetes.io/projected/3d55dc2a-8d2f-4f27-82ef-11744255c40c-kube-api-access-sc5q8\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.542430 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-scripts\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.542460 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-config-data\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.542486 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.542552 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.542603 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d55dc2a-8d2f-4f27-82ef-11744255c40c-log-httpd\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.542632 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d55dc2a-8d2f-4f27-82ef-11744255c40c-run-httpd\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.542647 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.543309 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d55dc2a-8d2f-4f27-82ef-11744255c40c-log-httpd\") pod \"ceilometer-0\" (UID: 
\"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.543882 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d55dc2a-8d2f-4f27-82ef-11744255c40c-run-httpd\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.547717 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-scripts\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.548906 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.549924 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-config-data\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.550506 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.556525 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d55dc2a-8d2f-4f27-82ef-11744255c40c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.564203 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc5q8\" (UniqueName: \"kubernetes.io/projected/3d55dc2a-8d2f-4f27-82ef-11744255c40c-kube-api-access-sc5q8\") pod \"ceilometer-0\" (UID: \"3d55dc2a-8d2f-4f27-82ef-11744255c40c\") " pod="openstack/ceilometer-0" Nov 28 11:28:00 crc kubenswrapper[4772]: I1128 11:28:00.705706 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 11:28:01 crc kubenswrapper[4772]: I1128 11:28:01.188975 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 11:28:01 crc kubenswrapper[4772]: W1128 11:28:01.203146 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d55dc2a_8d2f_4f27_82ef_11744255c40c.slice/crio-cfe971509ecca9d6ceeeb179cdcd819f62eaf2f703afc40e30619e8d0cacba0c WatchSource:0}: Error finding container cfe971509ecca9d6ceeeb179cdcd819f62eaf2f703afc40e30619e8d0cacba0c: Status 404 returned error can't find the container with id cfe971509ecca9d6ceeeb179cdcd819f62eaf2f703afc40e30619e8d0cacba0c Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.009715 4772 generic.go:334] "Generic (PLEG): container finished" podID="2944172d-3140-4ce7-83df-4b5fa866cd75" containerID="d5f3b628a69f614fac08fb09b59ff4f75f20aec9e7dcb0fa6ea4f084dce6b1a0" exitCode=0 Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.065256 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21a2205c-7bba-4f48-b2f7-03196fa277ac" path="/var/lib/kubelet/pods/21a2205c-7bba-4f48-b2f7-03196fa277ac/volumes" Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.066319 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2944172d-3140-4ce7-83df-4b5fa866cd75","Type":"ContainerDied","Data":"d5f3b628a69f614fac08fb09b59ff4f75f20aec9e7dcb0fa6ea4f084dce6b1a0"} Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.066368 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d55dc2a-8d2f-4f27-82ef-11744255c40c","Type":"ContainerStarted","Data":"cfe971509ecca9d6ceeeb179cdcd819f62eaf2f703afc40e30619e8d0cacba0c"} Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.225918 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.393029 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2944172d-3140-4ce7-83df-4b5fa866cd75-config-data\") pod \"2944172d-3140-4ce7-83df-4b5fa866cd75\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.393416 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2944172d-3140-4ce7-83df-4b5fa866cd75-logs\") pod \"2944172d-3140-4ce7-83df-4b5fa866cd75\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.393539 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2944172d-3140-4ce7-83df-4b5fa866cd75-combined-ca-bundle\") pod \"2944172d-3140-4ce7-83df-4b5fa866cd75\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.396507 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv6kb\" (UniqueName: \"kubernetes.io/projected/2944172d-3140-4ce7-83df-4b5fa866cd75-kube-api-access-tv6kb\") pod \"2944172d-3140-4ce7-83df-4b5fa866cd75\" (UID: \"2944172d-3140-4ce7-83df-4b5fa866cd75\") " Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.400435 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2944172d-3140-4ce7-83df-4b5fa866cd75-logs" (OuterVolumeSpecName: "logs") pod "2944172d-3140-4ce7-83df-4b5fa866cd75" (UID: "2944172d-3140-4ce7-83df-4b5fa866cd75"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.496587 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2944172d-3140-4ce7-83df-4b5fa866cd75-config-data" (OuterVolumeSpecName: "config-data") pod "2944172d-3140-4ce7-83df-4b5fa866cd75" (UID: "2944172d-3140-4ce7-83df-4b5fa866cd75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.500993 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2944172d-3140-4ce7-83df-4b5fa866cd75-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.501035 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2944172d-3140-4ce7-83df-4b5fa866cd75-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.504561 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2944172d-3140-4ce7-83df-4b5fa866cd75-kube-api-access-tv6kb" (OuterVolumeSpecName: "kube-api-access-tv6kb") pod "2944172d-3140-4ce7-83df-4b5fa866cd75" (UID: "2944172d-3140-4ce7-83df-4b5fa866cd75"). InnerVolumeSpecName "kube-api-access-tv6kb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.589491 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2944172d-3140-4ce7-83df-4b5fa866cd75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2944172d-3140-4ce7-83df-4b5fa866cd75" (UID: "2944172d-3140-4ce7-83df-4b5fa866cd75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.607863 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2944172d-3140-4ce7-83df-4b5fa866cd75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:02 crc kubenswrapper[4772]: I1128 11:28:02.607899 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv6kb\" (UniqueName: \"kubernetes.io/projected/2944172d-3140-4ce7-83df-4b5fa866cd75-kube-api-access-tv6kb\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.031308 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d55dc2a-8d2f-4f27-82ef-11744255c40c","Type":"ContainerStarted","Data":"8f3ed78f5b7aaf32c35f202ab31b156bf7ce1375605e396f46237c7cfb65289b"} Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.038075 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2944172d-3140-4ce7-83df-4b5fa866cd75","Type":"ContainerDied","Data":"d83d7d05fb7eee355625cba66685c94420f82b0f1103e156c3dbb7c575723a05"} Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.038137 4772 scope.go:117] "RemoveContainer" containerID="d5f3b628a69f614fac08fb09b59ff4f75f20aec9e7dcb0fa6ea4f084dce6b1a0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.038286 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.078663 4772 scope.go:117] "RemoveContainer" containerID="ec25988538de87c2c3e3f6a4ac4153e0aa2973567df280e4ccc43dc830820f89" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.094195 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.113760 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.129880 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 11:28:03 crc kubenswrapper[4772]: E1128 11:28:03.130478 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2944172d-3140-4ce7-83df-4b5fa866cd75" containerName="nova-api-api" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.130495 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2944172d-3140-4ce7-83df-4b5fa866cd75" containerName="nova-api-api" Nov 28 11:28:03 crc kubenswrapper[4772]: E1128 11:28:03.130509 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2944172d-3140-4ce7-83df-4b5fa866cd75" containerName="nova-api-log" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.130516 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2944172d-3140-4ce7-83df-4b5fa866cd75" containerName="nova-api-log" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.130764 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2944172d-3140-4ce7-83df-4b5fa866cd75" containerName="nova-api-api" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.130788 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2944172d-3140-4ce7-83df-4b5fa866cd75" containerName="nova-api-log" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.132160 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.137261 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.138343 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.141058 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.147105 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.225117 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-public-tls-certs\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.225270 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.225315 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-config-data\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.225414 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7x2n\" (UniqueName: \"kubernetes.io/projected/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-kube-api-access-h7x2n\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.225568 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-logs\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.225777 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.282689 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.313818 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.317108 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.317166 4772 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.329603 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-logs\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.329800 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.329925 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-public-tls-certs\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.330005 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.330035 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-config-data\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.330107 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7x2n\" (UniqueName: \"kubernetes.io/projected/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-kube-api-access-h7x2n\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.330611 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-logs\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.340876 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-public-tls-certs\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.340926 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.341487 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-config-data\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 
crc kubenswrapper[4772]: I1128 11:28:03.341680 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.353827 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7x2n\" (UniqueName: \"kubernetes.io/projected/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-kube-api-access-h7x2n\") pod \"nova-api-0\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.456796 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 11:28:03 crc kubenswrapper[4772]: I1128 11:28:03.789802 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.006942 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2944172d-3140-4ce7-83df-4b5fa866cd75" path="/var/lib/kubelet/pods/2944172d-3140-4ce7-83df-4b5fa866cd75/volumes" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.053737 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500","Type":"ContainerStarted","Data":"8a0be4b40d5f72facb48e31b880a517a850c07929a168cb5bd3c0d26457a3fab"} Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.062509 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d55dc2a-8d2f-4f27-82ef-11744255c40c","Type":"ContainerStarted","Data":"6d57155e209a5dbb81cbb5ca9220ebb65637cf1a2a918ae17f52f045bfe80abf"} Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.062586 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d55dc2a-8d2f-4f27-82ef-11744255c40c","Type":"ContainerStarted","Data":"4fe43afec73e6263aa769cf5ad09cc739c53805f964a0dc07d8325a89e5c2807"} Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.093996 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.331629 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="08a60d2c-4d5d-4fc0-895c-3368a0097268" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.332019 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="08a60d2c-4d5d-4fc0-895c-3368a0097268" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.425412 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bj2mg"] Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.427307 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.432010 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.439623 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bj2mg"] Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.459397 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.575487 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-config-data\") pod \"nova-cell1-cell-mapping-bj2mg\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.575571 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-scripts\") pod \"nova-cell1-cell-mapping-bj2mg\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.575614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bj2mg\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.575779 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp9tt\" (UniqueName: \"kubernetes.io/projected/97b1f57a-1490-46e4-82db-913a03cf8750-kube-api-access-gp9tt\") pod \"nova-cell1-cell-mapping-bj2mg\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.678251 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-config-data\") pod \"nova-cell1-cell-mapping-bj2mg\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.678335 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-scripts\") pod \"nova-cell1-cell-mapping-bj2mg\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.678422 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bj2mg\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.679604 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp9tt\" (UniqueName: 
\"kubernetes.io/projected/97b1f57a-1490-46e4-82db-913a03cf8750-kube-api-access-gp9tt\") pod \"nova-cell1-cell-mapping-bj2mg\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.690145 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-config-data\") pod \"nova-cell1-cell-mapping-bj2mg\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.695185 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bj2mg\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.697941 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-scripts\") pod \"nova-cell1-cell-mapping-bj2mg\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.698959 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp9tt\" (UniqueName: \"kubernetes.io/projected/97b1f57a-1490-46e4-82db-913a03cf8750-kube-api-access-gp9tt\") pod \"nova-cell1-cell-mapping-bj2mg\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:04 crc kubenswrapper[4772]: I1128 11:28:04.760094 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:05 crc kubenswrapper[4772]: I1128 11:28:05.082827 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500","Type":"ContainerStarted","Data":"ef06d9700d3aba66a3fd785268364accc4e676a19b00115f35298fd1915a9717"} Nov 28 11:28:05 crc kubenswrapper[4772]: I1128 11:28:05.083222 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500","Type":"ContainerStarted","Data":"aa04d3ef391ef8684dd7480dc67e0a71b442c014e4d6a7d02a90701358f42647"} Nov 28 11:28:05 crc kubenswrapper[4772]: I1128 11:28:05.126017 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.125989523 podStartE2EDuration="2.125989523s" podCreationTimestamp="2025-11-28 11:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:28:05.100166125 +0000 UTC m=+1283.423409352" watchObservedRunningTime="2025-11-28 11:28:05.125989523 +0000 UTC m=+1283.449232750" Nov 28 11:28:05 crc kubenswrapper[4772]: W1128 11:28:05.383093 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97b1f57a_1490_46e4_82db_913a03cf8750.slice/crio-5bf52b31468b0ec3e20bcb2c9ff76aa7021c42b55fbc7ad591c50ca25f82d492 WatchSource:0}: Error finding container 5bf52b31468b0ec3e20bcb2c9ff76aa7021c42b55fbc7ad591c50ca25f82d492: Status 404 returned error can't find the container with id 5bf52b31468b0ec3e20bcb2c9ff76aa7021c42b55fbc7ad591c50ca25f82d492 Nov 28 11:28:05 crc kubenswrapper[4772]: I1128 11:28:05.388043 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bj2mg"] Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.116886 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d55dc2a-8d2f-4f27-82ef-11744255c40c","Type":"ContainerStarted","Data":"caf7f8ad16268db0e96c241d679e651f557d80d08e8736b12ba6fded2a071bb4"} Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.117300 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.131178 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bj2mg" event={"ID":"97b1f57a-1490-46e4-82db-913a03cf8750","Type":"ContainerStarted","Data":"40433f1908faf255c96bc8ebf48da837755e6ec15f2cb47eeb4573cb1ad41e51"} Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.131237 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bj2mg" event={"ID":"97b1f57a-1490-46e4-82db-913a03cf8750","Type":"ContainerStarted","Data":"5bf52b31468b0ec3e20bcb2c9ff76aa7021c42b55fbc7ad591c50ca25f82d492"} Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.159016 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.177116181 podStartE2EDuration="6.158996304s" podCreationTimestamp="2025-11-28 11:28:00 +0000 UTC" firstStartedPulling="2025-11-28 11:28:01.209172404 +0000 UTC m=+1279.532415621" lastFinishedPulling="2025-11-28 11:28:05.191052517 +0000 UTC m=+1283.514295744" observedRunningTime="2025-11-28 11:28:06.151218782 +0000 UTC m=+1284.474462029" 
watchObservedRunningTime="2025-11-28 11:28:06.158996304 +0000 UTC m=+1284.482239531" Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.185583 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.195286 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bj2mg" podStartSLOduration=2.195263353 podStartE2EDuration="2.195263353s" podCreationTimestamp="2025-11-28 11:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:28:06.193514637 +0000 UTC m=+1284.516757884" watchObservedRunningTime="2025-11-28 11:28:06.195263353 +0000 UTC m=+1284.518506580" Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.276404 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-vrxt8"] Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.276687 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" podUID="472f1041-5c63-4f6d-997c-db8b89dfaacf" containerName="dnsmasq-dns" containerID="cri-o://8c8f80baece97d10094b75e96f7d719cc9925a7c840110c798ec4a2e70a82fc8" gracePeriod=10 Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.822596 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.945928 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-ovsdbserver-sb\") pod \"472f1041-5c63-4f6d-997c-db8b89dfaacf\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.945977 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-dns-swift-storage-0\") pod \"472f1041-5c63-4f6d-997c-db8b89dfaacf\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.946108 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-config\") pod \"472f1041-5c63-4f6d-997c-db8b89dfaacf\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.946219 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwqbh\" (UniqueName: \"kubernetes.io/projected/472f1041-5c63-4f6d-997c-db8b89dfaacf-kube-api-access-kwqbh\") pod \"472f1041-5c63-4f6d-997c-db8b89dfaacf\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.946256 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-ovsdbserver-nb\") pod \"472f1041-5c63-4f6d-997c-db8b89dfaacf\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.946315 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-dns-svc\") pod \"472f1041-5c63-4f6d-997c-db8b89dfaacf\" (UID: \"472f1041-5c63-4f6d-997c-db8b89dfaacf\") " Nov 28 11:28:06 crc kubenswrapper[4772]: I1128 11:28:06.972267 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472f1041-5c63-4f6d-997c-db8b89dfaacf-kube-api-access-kwqbh" (OuterVolumeSpecName: "kube-api-access-kwqbh") pod "472f1041-5c63-4f6d-997c-db8b89dfaacf" (UID: "472f1041-5c63-4f6d-997c-db8b89dfaacf"). InnerVolumeSpecName "kube-api-access-kwqbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.021913 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "472f1041-5c63-4f6d-997c-db8b89dfaacf" (UID: "472f1041-5c63-4f6d-997c-db8b89dfaacf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.024353 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "472f1041-5c63-4f6d-997c-db8b89dfaacf" (UID: "472f1041-5c63-4f6d-997c-db8b89dfaacf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.025295 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-config" (OuterVolumeSpecName: "config") pod "472f1041-5c63-4f6d-997c-db8b89dfaacf" (UID: "472f1041-5c63-4f6d-997c-db8b89dfaacf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.030235 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "472f1041-5c63-4f6d-997c-db8b89dfaacf" (UID: "472f1041-5c63-4f6d-997c-db8b89dfaacf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.032800 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "472f1041-5c63-4f6d-997c-db8b89dfaacf" (UID: "472f1041-5c63-4f6d-997c-db8b89dfaacf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.048326 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwqbh\" (UniqueName: \"kubernetes.io/projected/472f1041-5c63-4f6d-997c-db8b89dfaacf-kube-api-access-kwqbh\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.048370 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.048381 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.048392 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.048402 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.048412 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/472f1041-5c63-4f6d-997c-db8b89dfaacf-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.142916 4772 generic.go:334] "Generic (PLEG): container finished" podID="472f1041-5c63-4f6d-997c-db8b89dfaacf" containerID="8c8f80baece97d10094b75e96f7d719cc9925a7c840110c798ec4a2e70a82fc8" exitCode=0 Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.143324 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.144001 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" event={"ID":"472f1041-5c63-4f6d-997c-db8b89dfaacf","Type":"ContainerDied","Data":"8c8f80baece97d10094b75e96f7d719cc9925a7c840110c798ec4a2e70a82fc8"} Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.144066 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" event={"ID":"472f1041-5c63-4f6d-997c-db8b89dfaacf","Type":"ContainerDied","Data":"629e3cd2e684f8ad4cdf76a3cb0e532f849ad8d061174db0ac5268c6b97d3714"} Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.144110 4772 scope.go:117] "RemoveContainer" containerID="8c8f80baece97d10094b75e96f7d719cc9925a7c840110c798ec4a2e70a82fc8" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.175687 4772 scope.go:117] "RemoveContainer" containerID="2469a55ac671332b6c3c01bd8fec3decb81536081cb3c60ad4d7afa1f4341c8f" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.203504 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-vrxt8"] Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.216920 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-vrxt8"] Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.219079 4772 scope.go:117] "RemoveContainer" containerID="8c8f80baece97d10094b75e96f7d719cc9925a7c840110c798ec4a2e70a82fc8" Nov 28 11:28:07 crc kubenswrapper[4772]: E1128 11:28:07.219813 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c8f80baece97d10094b75e96f7d719cc9925a7c840110c798ec4a2e70a82fc8\": container with ID starting with 8c8f80baece97d10094b75e96f7d719cc9925a7c840110c798ec4a2e70a82fc8 not found: ID does not exist" containerID="8c8f80baece97d10094b75e96f7d719cc9925a7c840110c798ec4a2e70a82fc8" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.219875 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c8f80baece97d10094b75e96f7d719cc9925a7c840110c798ec4a2e70a82fc8"} err="failed to get container status \"8c8f80baece97d10094b75e96f7d719cc9925a7c840110c798ec4a2e70a82fc8\": rpc error: code = NotFound desc = could not find container \"8c8f80baece97d10094b75e96f7d719cc9925a7c840110c798ec4a2e70a82fc8\": container with ID starting with 8c8f80baece97d10094b75e96f7d719cc9925a7c840110c798ec4a2e70a82fc8 not found: ID does not exist" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.219910 4772 scope.go:117] "RemoveContainer" containerID="2469a55ac671332b6c3c01bd8fec3decb81536081cb3c60ad4d7afa1f4341c8f" Nov 28 11:28:07 crc kubenswrapper[4772]: E1128 11:28:07.220767 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2469a55ac671332b6c3c01bd8fec3decb81536081cb3c60ad4d7afa1f4341c8f\": container with ID starting with 2469a55ac671332b6c3c01bd8fec3decb81536081cb3c60ad4d7afa1f4341c8f not found: ID does not exist" containerID="2469a55ac671332b6c3c01bd8fec3decb81536081cb3c60ad4d7afa1f4341c8f" Nov 28 11:28:07 crc kubenswrapper[4772]: I1128 11:28:07.220811 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2469a55ac671332b6c3c01bd8fec3decb81536081cb3c60ad4d7afa1f4341c8f"} err="failed to get container status 
\"2469a55ac671332b6c3c01bd8fec3decb81536081cb3c60ad4d7afa1f4341c8f\": rpc error: code = NotFound desc = could not find container \"2469a55ac671332b6c3c01bd8fec3decb81536081cb3c60ad4d7afa1f4341c8f\": container with ID starting with 2469a55ac671332b6c3c01bd8fec3decb81536081cb3c60ad4d7afa1f4341c8f not found: ID does not exist" Nov 28 11:28:08 crc kubenswrapper[4772]: I1128 11:28:08.007245 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="472f1041-5c63-4f6d-997c-db8b89dfaacf" path="/var/lib/kubelet/pods/472f1041-5c63-4f6d-997c-db8b89dfaacf/volumes" Nov 28 11:28:11 crc kubenswrapper[4772]: I1128 11:28:11.625817 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bccf8f775-vrxt8" podUID="472f1041-5c63-4f6d-997c-db8b89dfaacf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.189:5353: i/o timeout" Nov 28 11:28:12 crc kubenswrapper[4772]: I1128 11:28:12.205682 4772 generic.go:334] "Generic (PLEG): container finished" podID="97b1f57a-1490-46e4-82db-913a03cf8750" containerID="40433f1908faf255c96bc8ebf48da837755e6ec15f2cb47eeb4573cb1ad41e51" exitCode=0 Nov 28 11:28:12 crc kubenswrapper[4772]: I1128 11:28:12.205803 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bj2mg" event={"ID":"97b1f57a-1490-46e4-82db-913a03cf8750","Type":"ContainerDied","Data":"40433f1908faf255c96bc8ebf48da837755e6ec15f2cb47eeb4573cb1ad41e51"} Nov 28 11:28:13 crc kubenswrapper[4772]: I1128 11:28:13.342387 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 11:28:13 crc kubenswrapper[4772]: I1128 11:28:13.344028 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 11:28:13 crc kubenswrapper[4772]: I1128 11:28:13.357122 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 11:28:13 crc kubenswrapper[4772]: I1128 11:28:13.459063 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 11:28:13 crc kubenswrapper[4772]: I1128 11:28:13.459112 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 11:28:13 crc kubenswrapper[4772]: I1128 11:28:13.853574 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:13 crc kubenswrapper[4772]: I1128 11:28:13.936274 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-scripts\") pod \"97b1f57a-1490-46e4-82db-913a03cf8750\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " Nov 28 11:28:13 crc kubenswrapper[4772]: I1128 11:28:13.936488 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-config-data\") pod \"97b1f57a-1490-46e4-82db-913a03cf8750\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " Nov 28 11:28:13 crc kubenswrapper[4772]: I1128 11:28:13.937098 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp9tt\" (UniqueName: \"kubernetes.io/projected/97b1f57a-1490-46e4-82db-913a03cf8750-kube-api-access-gp9tt\") pod \"97b1f57a-1490-46e4-82db-913a03cf8750\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " Nov 28 11:28:13 crc kubenswrapper[4772]: I1128 11:28:13.937191 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-combined-ca-bundle\") pod \"97b1f57a-1490-46e4-82db-913a03cf8750\" (UID: \"97b1f57a-1490-46e4-82db-913a03cf8750\") " Nov 28 11:28:13 crc kubenswrapper[4772]: I1128 11:28:13.945683 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b1f57a-1490-46e4-82db-913a03cf8750-kube-api-access-gp9tt" (OuterVolumeSpecName: "kube-api-access-gp9tt") pod "97b1f57a-1490-46e4-82db-913a03cf8750" (UID: "97b1f57a-1490-46e4-82db-913a03cf8750"). InnerVolumeSpecName "kube-api-access-gp9tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:28:13 crc kubenswrapper[4772]: I1128 11:28:13.964581 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-scripts" (OuterVolumeSpecName: "scripts") pod "97b1f57a-1490-46e4-82db-913a03cf8750" (UID: "97b1f57a-1490-46e4-82db-913a03cf8750"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:13 crc kubenswrapper[4772]: I1128 11:28:13.972997 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-config-data" (OuterVolumeSpecName: "config-data") pod "97b1f57a-1490-46e4-82db-913a03cf8750" (UID: "97b1f57a-1490-46e4-82db-913a03cf8750"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:13 crc kubenswrapper[4772]: I1128 11:28:13.975871 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97b1f57a-1490-46e4-82db-913a03cf8750" (UID: "97b1f57a-1490-46e4-82db-913a03cf8750"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.040632 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.040683 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.040754 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp9tt\" (UniqueName: \"kubernetes.io/projected/97b1f57a-1490-46e4-82db-913a03cf8750-kube-api-access-gp9tt\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.040774 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b1f57a-1490-46e4-82db-913a03cf8750-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.230011 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bj2mg" Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.230070 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bj2mg" event={"ID":"97b1f57a-1490-46e4-82db-913a03cf8750","Type":"ContainerDied","Data":"5bf52b31468b0ec3e20bcb2c9ff76aa7021c42b55fbc7ad591c50ca25f82d492"} Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.230101 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bf52b31468b0ec3e20bcb2c9ff76aa7021c42b55fbc7ad591c50ca25f82d492" Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.243020 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.515683 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.516128 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.542690 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.543005 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" containerName="nova-api-log" containerID="cri-o://ef06d9700d3aba66a3fd785268364accc4e676a19b00115f35298fd1915a9717" gracePeriod=30 Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.543867 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" containerName="nova-api-api" containerID="cri-o://aa04d3ef391ef8684dd7480dc67e0a71b442c014e4d6a7d02a90701358f42647" 
gracePeriod=30 Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.557223 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.557530 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0570449e-b49f-49ef-b8a4-bc7a55a8fe14" containerName="nova-scheduler-scheduler" containerID="cri-o://ca55e97962b98e80524c3e2265733cec3c142c281df9b5a3f885aa0783449d40" gracePeriod=30 Nov 28 11:28:14 crc kubenswrapper[4772]: I1128 11:28:14.618345 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 11:28:15 crc kubenswrapper[4772]: I1128 11:28:15.244734 4772 generic.go:334] "Generic (PLEG): container finished" podID="1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" containerID="ef06d9700d3aba66a3fd785268364accc4e676a19b00115f35298fd1915a9717" exitCode=143 Nov 28 11:28:15 crc kubenswrapper[4772]: I1128 11:28:15.244843 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500","Type":"ContainerDied","Data":"ef06d9700d3aba66a3fd785268364accc4e676a19b00115f35298fd1915a9717"} Nov 28 11:28:16 crc kubenswrapper[4772]: I1128 11:28:16.254600 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="08a60d2c-4d5d-4fc0-895c-3368a0097268" containerName="nova-metadata-log" containerID="cri-o://9e0580120a897d874a82efcf017ee9514326ff0cb12510bd3312fe567fdff4fe" gracePeriod=30 Nov 28 11:28:16 crc kubenswrapper[4772]: I1128 11:28:16.254782 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="08a60d2c-4d5d-4fc0-895c-3368a0097268" containerName="nova-metadata-metadata" containerID="cri-o://c5bab7c28e48cd08a35741dbe8d0ceb54ae312591a2e49fd502f62fea54973af" gracePeriod=30 Nov 28 11:28:17 crc kubenswrapper[4772]: I1128 11:28:17.271724 4772 generic.go:334] "Generic (PLEG): container finished" podID="08a60d2c-4d5d-4fc0-895c-3368a0097268" containerID="9e0580120a897d874a82efcf017ee9514326ff0cb12510bd3312fe567fdff4fe" exitCode=143 Nov 28 11:28:17 crc kubenswrapper[4772]: I1128 11:28:17.271804 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08a60d2c-4d5d-4fc0-895c-3368a0097268","Type":"ContainerDied","Data":"9e0580120a897d874a82efcf017ee9514326ff0cb12510bd3312fe567fdff4fe"} Nov 28 11:28:18 crc kubenswrapper[4772]: I1128 11:28:18.979549 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.077032 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-combined-ca-bundle\") pod \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\" (UID: \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\") " Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.077294 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-config-data\") pod \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\" (UID: \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\") " Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.077651 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn2k7\" (UniqueName: \"kubernetes.io/projected/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-kube-api-access-tn2k7\") pod \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\" (UID: \"0570449e-b49f-49ef-b8a4-bc7a55a8fe14\") " Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.085545 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-kube-api-access-tn2k7" (OuterVolumeSpecName: "kube-api-access-tn2k7") pod "0570449e-b49f-49ef-b8a4-bc7a55a8fe14" (UID: "0570449e-b49f-49ef-b8a4-bc7a55a8fe14"). InnerVolumeSpecName "kube-api-access-tn2k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.117939 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0570449e-b49f-49ef-b8a4-bc7a55a8fe14" (UID: "0570449e-b49f-49ef-b8a4-bc7a55a8fe14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.131114 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-config-data" (OuterVolumeSpecName: "config-data") pod "0570449e-b49f-49ef-b8a4-bc7a55a8fe14" (UID: "0570449e-b49f-49ef-b8a4-bc7a55a8fe14"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.180284 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn2k7\" (UniqueName: \"kubernetes.io/projected/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-kube-api-access-tn2k7\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.180325 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.180335 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0570449e-b49f-49ef-b8a4-bc7a55a8fe14-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.315181 4772 generic.go:334] "Generic (PLEG): container finished" podID="0570449e-b49f-49ef-b8a4-bc7a55a8fe14" containerID="ca55e97962b98e80524c3e2265733cec3c142c281df9b5a3f885aa0783449d40" exitCode=0 Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.315235 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0570449e-b49f-49ef-b8a4-bc7a55a8fe14","Type":"ContainerDied","Data":"ca55e97962b98e80524c3e2265733cec3c142c281df9b5a3f885aa0783449d40"} Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.315323 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0570449e-b49f-49ef-b8a4-bc7a55a8fe14","Type":"ContainerDied","Data":"63724dc8f4348b6c948c13c7458d6c330cf67c07979828e41ab67f07b462f333"} Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.315332 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.315377 4772 scope.go:117] "RemoveContainer" containerID="ca55e97962b98e80524c3e2265733cec3c142c281df9b5a3f885aa0783449d40" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.350149 4772 scope.go:117] "RemoveContainer" containerID="ca55e97962b98e80524c3e2265733cec3c142c281df9b5a3f885aa0783449d40" Nov 28 11:28:19 crc kubenswrapper[4772]: E1128 11:28:19.351000 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca55e97962b98e80524c3e2265733cec3c142c281df9b5a3f885aa0783449d40\": container with ID starting with ca55e97962b98e80524c3e2265733cec3c142c281df9b5a3f885aa0783449d40 not found: ID does not exist" containerID="ca55e97962b98e80524c3e2265733cec3c142c281df9b5a3f885aa0783449d40" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.351047 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca55e97962b98e80524c3e2265733cec3c142c281df9b5a3f885aa0783449d40"} err="failed to get container status \"ca55e97962b98e80524c3e2265733cec3c142c281df9b5a3f885aa0783449d40\": rpc error: code = NotFound desc = could not find container \"ca55e97962b98e80524c3e2265733cec3c142c281df9b5a3f885aa0783449d40\": container with ID starting with ca55e97962b98e80524c3e2265733cec3c142c281df9b5a3f885aa0783449d40 not found: ID does not exist" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.360844 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.383669 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.394011 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:28:19 crc kubenswrapper[4772]: E1128 11:28:19.394601 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b1f57a-1490-46e4-82db-913a03cf8750" containerName="nova-manage" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.394624 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b1f57a-1490-46e4-82db-913a03cf8750" containerName="nova-manage" Nov 28 11:28:19 crc kubenswrapper[4772]: E1128 11:28:19.394647 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0570449e-b49f-49ef-b8a4-bc7a55a8fe14" containerName="nova-scheduler-scheduler" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.394655 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0570449e-b49f-49ef-b8a4-bc7a55a8fe14" containerName="nova-scheduler-scheduler" Nov 28 11:28:19 crc kubenswrapper[4772]: E1128 11:28:19.394671 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472f1041-5c63-4f6d-997c-db8b89dfaacf" containerName="dnsmasq-dns" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.394677 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="472f1041-5c63-4f6d-997c-db8b89dfaacf" containerName="dnsmasq-dns" Nov 28 11:28:19 crc kubenswrapper[4772]: E1128 11:28:19.394700 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472f1041-5c63-4f6d-997c-db8b89dfaacf" containerName="init" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.394706 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="472f1041-5c63-4f6d-997c-db8b89dfaacf" containerName="init" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 
11:28:19.394881 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="0570449e-b49f-49ef-b8a4-bc7a55a8fe14" containerName="nova-scheduler-scheduler" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.394903 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="472f1041-5c63-4f6d-997c-db8b89dfaacf" containerName="dnsmasq-dns" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.394912 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b1f57a-1490-46e4-82db-913a03cf8750" containerName="nova-manage" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.395685 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.402054 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.405611 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="08a60d2c-4d5d-4fc0-895c-3368a0097268" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:46708->10.217.0.196:8775: read: connection reset by peer" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.405604 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="08a60d2c-4d5d-4fc0-895c-3368a0097268" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:46724->10.217.0.196:8775: read: connection reset by peer" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.413090 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.590805 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d68d6b7e-6515-4085-82db-aa7d361d06e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d68d6b7e-6515-4085-82db-aa7d361d06e6\") " pod="openstack/nova-scheduler-0" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.590927 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvcc7\" (UniqueName: \"kubernetes.io/projected/d68d6b7e-6515-4085-82db-aa7d361d06e6-kube-api-access-cvcc7\") pod \"nova-scheduler-0\" (UID: \"d68d6b7e-6515-4085-82db-aa7d361d06e6\") " pod="openstack/nova-scheduler-0" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.591338 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d68d6b7e-6515-4085-82db-aa7d361d06e6-config-data\") pod \"nova-scheduler-0\" (UID: \"d68d6b7e-6515-4085-82db-aa7d361d06e6\") " pod="openstack/nova-scheduler-0" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.693785 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d68d6b7e-6515-4085-82db-aa7d361d06e6-config-data\") pod \"nova-scheduler-0\" (UID: \"d68d6b7e-6515-4085-82db-aa7d361d06e6\") " pod="openstack/nova-scheduler-0" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.693968 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d68d6b7e-6515-4085-82db-aa7d361d06e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d68d6b7e-6515-4085-82db-aa7d361d06e6\") " pod="openstack/nova-scheduler-0" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.694056 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvcc7\" (UniqueName: \"kubernetes.io/projected/d68d6b7e-6515-4085-82db-aa7d361d06e6-kube-api-access-cvcc7\") pod \"nova-scheduler-0\" (UID: \"d68d6b7e-6515-4085-82db-aa7d361d06e6\") " pod="openstack/nova-scheduler-0" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.705732 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d68d6b7e-6515-4085-82db-aa7d361d06e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d68d6b7e-6515-4085-82db-aa7d361d06e6\") " pod="openstack/nova-scheduler-0" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.714011 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvcc7\" (UniqueName: \"kubernetes.io/projected/d68d6b7e-6515-4085-82db-aa7d361d06e6-kube-api-access-cvcc7\") pod \"nova-scheduler-0\" (UID: \"d68d6b7e-6515-4085-82db-aa7d361d06e6\") " pod="openstack/nova-scheduler-0" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.721211 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d68d6b7e-6515-4085-82db-aa7d361d06e6-config-data\") pod \"nova-scheduler-0\" (UID: \"d68d6b7e-6515-4085-82db-aa7d361d06e6\") " pod="openstack/nova-scheduler-0" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.773645 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.901749 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 11:28:19 crc kubenswrapper[4772]: I1128 11:28:19.999443 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-config-data\") pod \"08a60d2c-4d5d-4fc0-895c-3368a0097268\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:19.999570 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-combined-ca-bundle\") pod \"08a60d2c-4d5d-4fc0-895c-3368a0097268\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:19.999622 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a60d2c-4d5d-4fc0-895c-3368a0097268-logs\") pod \"08a60d2c-4d5d-4fc0-895c-3368a0097268\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:19.999724 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpdjs\" (UniqueName: \"kubernetes.io/projected/08a60d2c-4d5d-4fc0-895c-3368a0097268-kube-api-access-cpdjs\") pod \"08a60d2c-4d5d-4fc0-895c-3368a0097268\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:19.999750 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-nova-metadata-tls-certs\") pod \"08a60d2c-4d5d-4fc0-895c-3368a0097268\" (UID: \"08a60d2c-4d5d-4fc0-895c-3368a0097268\") " Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.000690 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08a60d2c-4d5d-4fc0-895c-3368a0097268-logs" (OuterVolumeSpecName: "logs") pod "08a60d2c-4d5d-4fc0-895c-3368a0097268" (UID: "08a60d2c-4d5d-4fc0-895c-3368a0097268"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.008152 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a60d2c-4d5d-4fc0-895c-3368a0097268-kube-api-access-cpdjs" (OuterVolumeSpecName: "kube-api-access-cpdjs") pod "08a60d2c-4d5d-4fc0-895c-3368a0097268" (UID: "08a60d2c-4d5d-4fc0-895c-3368a0097268"). InnerVolumeSpecName "kube-api-access-cpdjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.014849 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0570449e-b49f-49ef-b8a4-bc7a55a8fe14" path="/var/lib/kubelet/pods/0570449e-b49f-49ef-b8a4-bc7a55a8fe14/volumes" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.055254 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-config-data" (OuterVolumeSpecName: "config-data") pod "08a60d2c-4d5d-4fc0-895c-3368a0097268" (UID: "08a60d2c-4d5d-4fc0-895c-3368a0097268"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.061085 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08a60d2c-4d5d-4fc0-895c-3368a0097268" (UID: "08a60d2c-4d5d-4fc0-895c-3368a0097268"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.089810 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "08a60d2c-4d5d-4fc0-895c-3368a0097268" (UID: "08a60d2c-4d5d-4fc0-895c-3368a0097268"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.102389 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.102431 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.102444 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a60d2c-4d5d-4fc0-895c-3368a0097268-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.102453 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpdjs\" (UniqueName: \"kubernetes.io/projected/08a60d2c-4d5d-4fc0-895c-3368a0097268-kube-api-access-cpdjs\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.102464 4772 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a60d2c-4d5d-4fc0-895c-3368a0097268-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.375827 4772 generic.go:334] "Generic (PLEG): container finished" podID="08a60d2c-4d5d-4fc0-895c-3368a0097268" containerID="c5bab7c28e48cd08a35741dbe8d0ceb54ae312591a2e49fd502f62fea54973af" exitCode=0 Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.376093 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08a60d2c-4d5d-4fc0-895c-3368a0097268","Type":"ContainerDied","Data":"c5bab7c28e48cd08a35741dbe8d0ceb54ae312591a2e49fd502f62fea54973af"} Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.376143 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08a60d2c-4d5d-4fc0-895c-3368a0097268","Type":"ContainerDied","Data":"4a900b2ce398c0eb1ac24c6e8b4bb2f806d9e509977b7ddb24c50eed3a7a5ab3"} Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.376161 4772 scope.go:117] "RemoveContainer" containerID="c5bab7c28e48cd08a35741dbe8d0ceb54ae312591a2e49fd502f62fea54973af" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.376272 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.416912 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.437666 4772 generic.go:334] "Generic (PLEG): container finished" podID="1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" containerID="aa04d3ef391ef8684dd7480dc67e0a71b442c014e4d6a7d02a90701358f42647" exitCode=0 Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.437736 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500","Type":"ContainerDied","Data":"aa04d3ef391ef8684dd7480dc67e0a71b442c014e4d6a7d02a90701358f42647"} Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.477347 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.488066 4772 scope.go:117] "RemoveContainer" containerID="9e0580120a897d874a82efcf017ee9514326ff0cb12510bd3312fe567fdff4fe" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.511420 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.537871 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.557407 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 11:28:20 crc kubenswrapper[4772]: E1128 11:28:20.557910 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a60d2c-4d5d-4fc0-895c-3368a0097268" containerName="nova-metadata-log" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.557926 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a60d2c-4d5d-4fc0-895c-3368a0097268" containerName="nova-metadata-log" Nov 28 11:28:20 crc kubenswrapper[4772]: E1128 11:28:20.557936 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a60d2c-4d5d-4fc0-895c-3368a0097268" containerName="nova-metadata-metadata" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.557943 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a60d2c-4d5d-4fc0-895c-3368a0097268" containerName="nova-metadata-metadata" Nov 28 11:28:20 crc kubenswrapper[4772]: E1128 11:28:20.557990 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" containerName="nova-api-log" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.557997 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" containerName="nova-api-log" Nov 28 11:28:20 crc kubenswrapper[4772]: E1128 11:28:20.558005 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" containerName="nova-api-api" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.558012 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" containerName="nova-api-api" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.558190 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" containerName="nova-api-api" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.558205 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a60d2c-4d5d-4fc0-895c-3368a0097268" 
containerName="nova-metadata-log" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.558223 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" containerName="nova-api-log" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.558242 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a60d2c-4d5d-4fc0-895c-3368a0097268" containerName="nova-metadata-metadata" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.559322 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.565059 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.565596 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.568480 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.592037 4772 scope.go:117] "RemoveContainer" containerID="c5bab7c28e48cd08a35741dbe8d0ceb54ae312591a2e49fd502f62fea54973af" Nov 28 11:28:20 crc kubenswrapper[4772]: E1128 11:28:20.598837 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5bab7c28e48cd08a35741dbe8d0ceb54ae312591a2e49fd502f62fea54973af\": container with ID starting with c5bab7c28e48cd08a35741dbe8d0ceb54ae312591a2e49fd502f62fea54973af not found: ID does not exist" containerID="c5bab7c28e48cd08a35741dbe8d0ceb54ae312591a2e49fd502f62fea54973af" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.598889 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5bab7c28e48cd08a35741dbe8d0ceb54ae312591a2e49fd502f62fea54973af"} err="failed to get container status \"c5bab7c28e48cd08a35741dbe8d0ceb54ae312591a2e49fd502f62fea54973af\": rpc error: code = NotFound desc = could not find container \"c5bab7c28e48cd08a35741dbe8d0ceb54ae312591a2e49fd502f62fea54973af\": container with ID starting with c5bab7c28e48cd08a35741dbe8d0ceb54ae312591a2e49fd502f62fea54973af not found: ID does not exist" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.598917 4772 scope.go:117] "RemoveContainer" containerID="9e0580120a897d874a82efcf017ee9514326ff0cb12510bd3312fe567fdff4fe" Nov 28 11:28:20 crc kubenswrapper[4772]: E1128 11:28:20.603085 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e0580120a897d874a82efcf017ee9514326ff0cb12510bd3312fe567fdff4fe\": container with ID starting with 9e0580120a897d874a82efcf017ee9514326ff0cb12510bd3312fe567fdff4fe not found: ID does not exist" containerID="9e0580120a897d874a82efcf017ee9514326ff0cb12510bd3312fe567fdff4fe" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.603145 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e0580120a897d874a82efcf017ee9514326ff0cb12510bd3312fe567fdff4fe"} err="failed to get container status \"9e0580120a897d874a82efcf017ee9514326ff0cb12510bd3312fe567fdff4fe\": rpc error: code = NotFound desc = could not find container \"9e0580120a897d874a82efcf017ee9514326ff0cb12510bd3312fe567fdff4fe\": container with ID starting with 
9e0580120a897d874a82efcf017ee9514326ff0cb12510bd3312fe567fdff4fe not found: ID does not exist" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.732691 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-combined-ca-bundle\") pod \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.733227 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7x2n\" (UniqueName: \"kubernetes.io/projected/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-kube-api-access-h7x2n\") pod \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.733286 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-public-tls-certs\") pod \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.733533 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-internal-tls-certs\") pod \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.733608 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-config-data\") pod \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.733698 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-logs\") pod \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\" (UID: \"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500\") " Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.734037 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/36168857-1f1a-48f9-8adb-53889086486e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.734068 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36168857-1f1a-48f9-8adb-53889086486e-logs\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.734095 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36168857-1f1a-48f9-8adb-53889086486e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.734162 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/36168857-1f1a-48f9-8adb-53889086486e-config-data\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.734213 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkm4t\" (UniqueName: \"kubernetes.io/projected/36168857-1f1a-48f9-8adb-53889086486e-kube-api-access-jkm4t\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.735235 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-logs" (OuterVolumeSpecName: "logs") pod "1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" (UID: "1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.737658 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-kube-api-access-h7x2n" (OuterVolumeSpecName: "kube-api-access-h7x2n") pod "1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" (UID: "1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500"). InnerVolumeSpecName "kube-api-access-h7x2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.779530 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" (UID: "1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.783779 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-config-data" (OuterVolumeSpecName: "config-data") pod "1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" (UID: "1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.810346 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" (UID: "1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.822981 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" (UID: "1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.836124 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/36168857-1f1a-48f9-8adb-53889086486e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.836853 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36168857-1f1a-48f9-8adb-53889086486e-logs\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.837201 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36168857-1f1a-48f9-8adb-53889086486e-logs\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.837286 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36168857-1f1a-48f9-8adb-53889086486e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.846636 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36168857-1f1a-48f9-8adb-53889086486e-config-data\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.846873 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkm4t\" (UniqueName: \"kubernetes.io/projected/36168857-1f1a-48f9-8adb-53889086486e-kube-api-access-jkm4t\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.847208 4772 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.847239 4772 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.847251 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.847312 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-logs\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.847324 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:20 crc 
kubenswrapper[4772]: I1128 11:28:20.847342 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7x2n\" (UniqueName: \"kubernetes.io/projected/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500-kube-api-access-h7x2n\") on node \"crc\" DevicePath \"\"" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.850707 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/36168857-1f1a-48f9-8adb-53889086486e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.862616 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36168857-1f1a-48f9-8adb-53889086486e-config-data\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.865985 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkm4t\" (UniqueName: \"kubernetes.io/projected/36168857-1f1a-48f9-8adb-53889086486e-kube-api-access-jkm4t\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:20 crc kubenswrapper[4772]: I1128 11:28:20.888283 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36168857-1f1a-48f9-8adb-53889086486e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36168857-1f1a-48f9-8adb-53889086486e\") " pod="openstack/nova-metadata-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.024026 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.456545 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.456807 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500","Type":"ContainerDied","Data":"8a0be4b40d5f72facb48e31b880a517a850c07929a168cb5bd3c0d26457a3fab"} Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.457409 4772 scope.go:117] "RemoveContainer" containerID="aa04d3ef391ef8684dd7480dc67e0a71b442c014e4d6a7d02a90701358f42647" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.461686 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d68d6b7e-6515-4085-82db-aa7d361d06e6","Type":"ContainerStarted","Data":"0b1534fd208e4b17dfc3bbbdfc4369408d0f46823a5efcbeeeb54e41cfd75d59"} Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.461735 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d68d6b7e-6515-4085-82db-aa7d361d06e6","Type":"ContainerStarted","Data":"7f3fdde9152f8826389141f34eddd74f2761cb59e1353e842fadac16dfd159bd"} Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.489678 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.489661532 podStartE2EDuration="2.489661532s" podCreationTimestamp="2025-11-28 11:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:28:21.487344862 +0000 UTC m=+1299.810588129" watchObservedRunningTime="2025-11-28 11:28:21.489661532 +0000 UTC m=+1299.812904759" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.519508 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.531759 4772 scope.go:117] "RemoveContainer" containerID="ef06d9700d3aba66a3fd785268364accc4e676a19b00115f35298fd1915a9717" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.539391 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.564714 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.585035 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.587967 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.597046 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.597459 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.597484 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.601995 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.766193 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cf07d5-2221-4a24-af67-730b13cd2021-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.766683 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggx6w\" (UniqueName: \"kubernetes.io/projected/04cf07d5-2221-4a24-af67-730b13cd2021-kube-api-access-ggx6w\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.766725 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cf07d5-2221-4a24-af67-730b13cd2021-config-data\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.766819 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cf07d5-2221-4a24-af67-730b13cd2021-logs\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.766893 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cf07d5-2221-4a24-af67-730b13cd2021-internal-tls-certs\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.766962 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cf07d5-2221-4a24-af67-730b13cd2021-public-tls-certs\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.869169 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cf07d5-2221-4a24-af67-730b13cd2021-config-data\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.869335 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cf07d5-2221-4a24-af67-730b13cd2021-logs\") pod \"nova-api-0\" (UID: 
\"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.869465 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cf07d5-2221-4a24-af67-730b13cd2021-internal-tls-certs\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.869537 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cf07d5-2221-4a24-af67-730b13cd2021-public-tls-certs\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.869573 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cf07d5-2221-4a24-af67-730b13cd2021-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.869639 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggx6w\" (UniqueName: \"kubernetes.io/projected/04cf07d5-2221-4a24-af67-730b13cd2021-kube-api-access-ggx6w\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.873934 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cf07d5-2221-4a24-af67-730b13cd2021-logs\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.907545 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cf07d5-2221-4a24-af67-730b13cd2021-internal-tls-certs\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.912182 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cf07d5-2221-4a24-af67-730b13cd2021-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.913816 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggx6w\" (UniqueName: \"kubernetes.io/projected/04cf07d5-2221-4a24-af67-730b13cd2021-kube-api-access-ggx6w\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.914457 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cf07d5-2221-4a24-af67-730b13cd2021-config-data\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.916291 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cf07d5-2221-4a24-af67-730b13cd2021-public-tls-certs\") pod \"nova-api-0\" (UID: \"04cf07d5-2221-4a24-af67-730b13cd2021\") " 
pod="openstack/nova-api-0" Nov 28 11:28:21 crc kubenswrapper[4772]: I1128 11:28:21.956725 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 11:28:22 crc kubenswrapper[4772]: I1128 11:28:22.021735 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a60d2c-4d5d-4fc0-895c-3368a0097268" path="/var/lib/kubelet/pods/08a60d2c-4d5d-4fc0-895c-3368a0097268/volumes" Nov 28 11:28:22 crc kubenswrapper[4772]: I1128 11:28:22.022474 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500" path="/var/lib/kubelet/pods/1bb1ad51-8f19-4a8e-b73c-e8be6d9cc500/volumes" Nov 28 11:28:22 crc kubenswrapper[4772]: I1128 11:28:22.478461 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 11:28:22 crc kubenswrapper[4772]: W1128 11:28:22.481564 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04cf07d5_2221_4a24_af67_730b13cd2021.slice/crio-f5d5d3726b102110a13aa9e1373aed2170a3f7458c7bcc0fb15e827e6827ae65 WatchSource:0}: Error finding container f5d5d3726b102110a13aa9e1373aed2170a3f7458c7bcc0fb15e827e6827ae65: Status 404 returned error can't find the container with id f5d5d3726b102110a13aa9e1373aed2170a3f7458c7bcc0fb15e827e6827ae65 Nov 28 11:28:22 crc kubenswrapper[4772]: I1128 11:28:22.499644 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04cf07d5-2221-4a24-af67-730b13cd2021","Type":"ContainerStarted","Data":"f5d5d3726b102110a13aa9e1373aed2170a3f7458c7bcc0fb15e827e6827ae65"} Nov 28 11:28:22 crc kubenswrapper[4772]: I1128 11:28:22.506762 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36168857-1f1a-48f9-8adb-53889086486e","Type":"ContainerStarted","Data":"0475e7f76bff597d54cd1c9ede518422f8643c4b508148d9bce09808b8d20325"} Nov 28 11:28:22 crc kubenswrapper[4772]: I1128 11:28:22.506840 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36168857-1f1a-48f9-8adb-53889086486e","Type":"ContainerStarted","Data":"0b3bec755ed58e882aff9af6c82b730cc8031f3ec8ef9d6d9fee02e472875d67"} Nov 28 11:28:22 crc kubenswrapper[4772]: I1128 11:28:22.506878 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36168857-1f1a-48f9-8adb-53889086486e","Type":"ContainerStarted","Data":"767b5f87dcd2cf3f525737a8ed7899c153a94ad34c02d637a85bb4113412364a"} Nov 28 11:28:22 crc kubenswrapper[4772]: I1128 11:28:22.538676 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.538645126 podStartE2EDuration="2.538645126s" podCreationTimestamp="2025-11-28 11:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:28:22.534190571 +0000 UTC m=+1300.857433818" watchObservedRunningTime="2025-11-28 11:28:22.538645126 +0000 UTC m=+1300.861888373" Nov 28 11:28:23 crc kubenswrapper[4772]: I1128 11:28:23.529659 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04cf07d5-2221-4a24-af67-730b13cd2021","Type":"ContainerStarted","Data":"57f1b96616fda2efc60ce2507ccc88406479b1d132ce1d13193a8335c7b64567"} Nov 28 11:28:23 crc kubenswrapper[4772]: I1128 11:28:23.530337 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"04cf07d5-2221-4a24-af67-730b13cd2021","Type":"ContainerStarted","Data":"43b648d2949d428507d86c928a61bead3018d1c8bdefaa6f6e81985e415f9a89"} Nov 28 11:28:23 crc kubenswrapper[4772]: I1128 11:28:23.575573 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.575538926 podStartE2EDuration="2.575538926s" podCreationTimestamp="2025-11-28 11:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:28:23.56448195 +0000 UTC m=+1301.887725217" watchObservedRunningTime="2025-11-28 11:28:23.575538926 +0000 UTC m=+1301.898782193" Nov 28 11:28:24 crc kubenswrapper[4772]: I1128 11:28:24.774478 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 11:28:26 crc kubenswrapper[4772]: I1128 11:28:26.024916 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 11:28:26 crc kubenswrapper[4772]: I1128 11:28:26.024993 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 11:28:29 crc kubenswrapper[4772]: I1128 11:28:29.773785 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 11:28:29 crc kubenswrapper[4772]: I1128 11:28:29.823529 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 11:28:30 crc kubenswrapper[4772]: I1128 11:28:30.680203 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 11:28:30 crc kubenswrapper[4772]: I1128 11:28:30.720531 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 11:28:31 crc kubenswrapper[4772]: I1128 11:28:31.026664 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 11:28:31 crc kubenswrapper[4772]: I1128 11:28:31.026752 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 11:28:31 crc kubenswrapper[4772]: I1128 11:28:31.957816 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 11:28:31 crc kubenswrapper[4772]: I1128 11:28:31.958186 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 11:28:32 crc kubenswrapper[4772]: I1128 11:28:32.073726 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="36168857-1f1a-48f9-8adb-53889086486e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 11:28:32 crc kubenswrapper[4772]: I1128 11:28:32.074137 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="36168857-1f1a-48f9-8adb-53889086486e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 11:28:32 crc kubenswrapper[4772]: I1128 11:28:32.972559 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="04cf07d5-2221-4a24-af67-730b13cd2021" containerName="nova-api-api" 
probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 11:28:32 crc kubenswrapper[4772]: I1128 11:28:32.972691 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="04cf07d5-2221-4a24-af67-730b13cd2021" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 11:28:41 crc kubenswrapper[4772]: I1128 11:28:41.034355 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 11:28:41 crc kubenswrapper[4772]: I1128 11:28:41.036472 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 11:28:41 crc kubenswrapper[4772]: I1128 11:28:41.056947 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 11:28:41 crc kubenswrapper[4772]: I1128 11:28:41.776022 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 11:28:41 crc kubenswrapper[4772]: I1128 11:28:41.971538 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 11:28:41 crc kubenswrapper[4772]: I1128 11:28:41.972653 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 11:28:41 crc kubenswrapper[4772]: I1128 11:28:41.973003 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 11:28:41 crc kubenswrapper[4772]: I1128 11:28:41.984896 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 11:28:42 crc kubenswrapper[4772]: I1128 11:28:42.779009 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 11:28:42 crc kubenswrapper[4772]: I1128 11:28:42.785699 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 11:28:51 crc kubenswrapper[4772]: I1128 11:28:51.108249 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 11:28:52 crc kubenswrapper[4772]: I1128 11:28:52.279417 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 11:28:56 crc kubenswrapper[4772]: I1128 11:28:56.387719 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="4d684e5b-88f5-4004-a176-22ae480daaa6" containerName="rabbitmq" containerID="cri-o://e9087979407f97b11ae8526798f63b1fcdb093303aa3e09e4c1a3d05120aeca3" gracePeriod=604795 Nov 28 11:28:56 crc kubenswrapper[4772]: I1128 11:28:56.862549 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="52b2f98f-d36f-4798-9903-1f75498cdb5b" containerName="rabbitmq" containerID="cri-o://c1b5bc0d606b53b1f923962e6320bc50a7f3c41eaa28d4ea3325015f9d8f2470" gracePeriod=604796 Nov 28 11:29:01 crc kubenswrapper[4772]: I1128 11:29:01.229433 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="52b2f98f-d36f-4798-9903-1f75498cdb5b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Nov 28 11:29:01 crc kubenswrapper[4772]: I1128 11:29:01.759162 
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.041307 4772 generic.go:334] "Generic (PLEG): container finished" podID="4d684e5b-88f5-4004-a176-22ae480daaa6" containerID="e9087979407f97b11ae8526798f63b1fcdb093303aa3e09e4c1a3d05120aeca3" exitCode=0
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.041391 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4d684e5b-88f5-4004-a176-22ae480daaa6","Type":"ContainerDied","Data":"e9087979407f97b11ae8526798f63b1fcdb093303aa3e09e4c1a3d05120aeca3"}
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.042202 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4d684e5b-88f5-4004-a176-22ae480daaa6","Type":"ContainerDied","Data":"1b094406ebe0ce115e1533d81314e7786d1ca1cfeb129ab2d8ea32bcb3ccda82"}
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.042222 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b094406ebe0ce115e1533d81314e7786d1ca1cfeb129ab2d8ea32bcb3ccda82"
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.073652 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.215413 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r8n2\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-kube-api-access-8r8n2\") pod \"4d684e5b-88f5-4004-a176-22ae480daaa6\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") "
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.215573 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-plugins\") pod \"4d684e5b-88f5-4004-a176-22ae480daaa6\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") "
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.215678 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"4d684e5b-88f5-4004-a176-22ae480daaa6\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") "
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.215711 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d684e5b-88f5-4004-a176-22ae480daaa6-pod-info\") pod \"4d684e5b-88f5-4004-a176-22ae480daaa6\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") "
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.215771 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-confd\") pod \"4d684e5b-88f5-4004-a176-22ae480daaa6\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") "
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.215806 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-plugins-conf\") pod \"4d684e5b-88f5-4004-a176-22ae480daaa6\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") "
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.215882 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-tls\") pod \"4d684e5b-88f5-4004-a176-22ae480daaa6\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") "
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.215935 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-server-conf\") pod \"4d684e5b-88f5-4004-a176-22ae480daaa6\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") "
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.215968 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-erlang-cookie\") pod \"4d684e5b-88f5-4004-a176-22ae480daaa6\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") "
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.216000 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d684e5b-88f5-4004-a176-22ae480daaa6-erlang-cookie-secret\") pod \"4d684e5b-88f5-4004-a176-22ae480daaa6\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") "
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.216018 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-config-data\") pod \"4d684e5b-88f5-4004-a176-22ae480daaa6\" (UID: \"4d684e5b-88f5-4004-a176-22ae480daaa6\") "
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.216136 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4d684e5b-88f5-4004-a176-22ae480daaa6" (UID: "4d684e5b-88f5-4004-a176-22ae480daaa6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.216477 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.217378 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4d684e5b-88f5-4004-a176-22ae480daaa6" (UID: "4d684e5b-88f5-4004-a176-22ae480daaa6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.218997 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4d684e5b-88f5-4004-a176-22ae480daaa6" (UID: "4d684e5b-88f5-4004-a176-22ae480daaa6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.226028 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4d684e5b-88f5-4004-a176-22ae480daaa6-pod-info" (OuterVolumeSpecName: "pod-info") pod "4d684e5b-88f5-4004-a176-22ae480daaa6" (UID: "4d684e5b-88f5-4004-a176-22ae480daaa6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.226116 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4d684e5b-88f5-4004-a176-22ae480daaa6" (UID: "4d684e5b-88f5-4004-a176-22ae480daaa6"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.230431 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d684e5b-88f5-4004-a176-22ae480daaa6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4d684e5b-88f5-4004-a176-22ae480daaa6" (UID: "4d684e5b-88f5-4004-a176-22ae480daaa6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.237022 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-kube-api-access-8r8n2" (OuterVolumeSpecName: "kube-api-access-8r8n2") pod "4d684e5b-88f5-4004-a176-22ae480daaa6" (UID: "4d684e5b-88f5-4004-a176-22ae480daaa6"). InnerVolumeSpecName "kube-api-access-8r8n2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.239621 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "4d684e5b-88f5-4004-a176-22ae480daaa6" (UID: "4d684e5b-88f5-4004-a176-22ae480daaa6"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.275276 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-config-data" (OuterVolumeSpecName: "config-data") pod "4d684e5b-88f5-4004-a176-22ae480daaa6" (UID: "4d684e5b-88f5-4004-a176-22ae480daaa6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.319129 4772 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.319439 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.319565 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.319671 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.319763 4772 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d684e5b-88f5-4004-a176-22ae480daaa6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.319841 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r8n2\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-kube-api-access-8r8n2\") on node \"crc\" DevicePath \"\""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.319942 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.320215 4772 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d684e5b-88f5-4004-a176-22ae480daaa6-pod-info\") on node \"crc\" DevicePath \"\""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.326777 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-server-conf" (OuterVolumeSpecName: "server-conf") pod "4d684e5b-88f5-4004-a176-22ae480daaa6" (UID: "4d684e5b-88f5-4004-a176-22ae480daaa6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.344055 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.381641 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4d684e5b-88f5-4004-a176-22ae480daaa6" (UID: "4d684e5b-88f5-4004-a176-22ae480daaa6"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.422913 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.422955 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d684e5b-88f5-4004-a176-22ae480daaa6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.422971 4772 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d684e5b-88f5-4004-a176-22ae480daaa6-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.470864 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.625920 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-plugins\") pod \"52b2f98f-d36f-4798-9903-1f75498cdb5b\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.625982 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"52b2f98f-d36f-4798-9903-1f75498cdb5b\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.626033 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52b2f98f-d36f-4798-9903-1f75498cdb5b-pod-info\") pod \"52b2f98f-d36f-4798-9903-1f75498cdb5b\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.626093 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-server-conf\") pod \"52b2f98f-d36f-4798-9903-1f75498cdb5b\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.626168 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-plugins-conf\") pod \"52b2f98f-d36f-4798-9903-1f75498cdb5b\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.626592 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "52b2f98f-d36f-4798-9903-1f75498cdb5b" (UID: "52b2f98f-d36f-4798-9903-1f75498cdb5b"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.626740 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52b2f98f-d36f-4798-9903-1f75498cdb5b-erlang-cookie-secret\") pod \"52b2f98f-d36f-4798-9903-1f75498cdb5b\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.626847 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95v92\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-kube-api-access-95v92\") pod \"52b2f98f-d36f-4798-9903-1f75498cdb5b\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.626878 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-config-data\") pod \"52b2f98f-d36f-4798-9903-1f75498cdb5b\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.626908 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-erlang-cookie\") pod \"52b2f98f-d36f-4798-9903-1f75498cdb5b\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.626989 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-tls\") pod \"52b2f98f-d36f-4798-9903-1f75498cdb5b\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.627028 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "52b2f98f-d36f-4798-9903-1f75498cdb5b" (UID: "52b2f98f-d36f-4798-9903-1f75498cdb5b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.627046 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-confd\") pod \"52b2f98f-d36f-4798-9903-1f75498cdb5b\" (UID: \"52b2f98f-d36f-4798-9903-1f75498cdb5b\") " Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.627600 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.627642 4772 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.628903 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "52b2f98f-d36f-4798-9903-1f75498cdb5b" (UID: "52b2f98f-d36f-4798-9903-1f75498cdb5b"). 
InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.632544 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b2f98f-d36f-4798-9903-1f75498cdb5b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "52b2f98f-d36f-4798-9903-1f75498cdb5b" (UID: "52b2f98f-d36f-4798-9903-1f75498cdb5b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.632823 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "52b2f98f-d36f-4798-9903-1f75498cdb5b" (UID: "52b2f98f-d36f-4798-9903-1f75498cdb5b"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.633375 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "52b2f98f-d36f-4798-9903-1f75498cdb5b" (UID: "52b2f98f-d36f-4798-9903-1f75498cdb5b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.635503 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/52b2f98f-d36f-4798-9903-1f75498cdb5b-pod-info" (OuterVolumeSpecName: "pod-info") pod "52b2f98f-d36f-4798-9903-1f75498cdb5b" (UID: "52b2f98f-d36f-4798-9903-1f75498cdb5b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.638563 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-kube-api-access-95v92" (OuterVolumeSpecName: "kube-api-access-95v92") pod "52b2f98f-d36f-4798-9903-1f75498cdb5b" (UID: "52b2f98f-d36f-4798-9903-1f75498cdb5b"). InnerVolumeSpecName "kube-api-access-95v92". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.675164 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-config-data" (OuterVolumeSpecName: "config-data") pod "52b2f98f-d36f-4798-9903-1f75498cdb5b" (UID: "52b2f98f-d36f-4798-9903-1f75498cdb5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.702890 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-server-conf" (OuterVolumeSpecName: "server-conf") pod "52b2f98f-d36f-4798-9903-1f75498cdb5b" (UID: "52b2f98f-d36f-4798-9903-1f75498cdb5b"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.730496 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95v92\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-kube-api-access-95v92\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.730529 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.730543 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.730551 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.730597 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.730605 4772 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52b2f98f-d36f-4798-9903-1f75498cdb5b-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.730615 4772 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52b2f98f-d36f-4798-9903-1f75498cdb5b-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.730624 4772 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52b2f98f-d36f-4798-9903-1f75498cdb5b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.756714 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.778771 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "52b2f98f-d36f-4798-9903-1f75498cdb5b" (UID: "52b2f98f-d36f-4798-9903-1f75498cdb5b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.832020 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52b2f98f-d36f-4798-9903-1f75498cdb5b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:03 crc kubenswrapper[4772]: I1128 11:29:03.832063 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.056001 4772 generic.go:334] "Generic (PLEG): container finished" podID="52b2f98f-d36f-4798-9903-1f75498cdb5b" containerID="c1b5bc0d606b53b1f923962e6320bc50a7f3c41eaa28d4ea3325015f9d8f2470" exitCode=0 Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.056064 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52b2f98f-d36f-4798-9903-1f75498cdb5b","Type":"ContainerDied","Data":"c1b5bc0d606b53b1f923962e6320bc50a7f3c41eaa28d4ea3325015f9d8f2470"} Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.056085 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.056109 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52b2f98f-d36f-4798-9903-1f75498cdb5b","Type":"ContainerDied","Data":"c9bb787459e31b94e90e0d29c567b5042db7e462d37f4ef8565dbf001daba98c"} Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.056131 4772 scope.go:117] "RemoveContainer" containerID="c1b5bc0d606b53b1f923962e6320bc50a7f3c41eaa28d4ea3325015f9d8f2470" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.056149 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.097956 4772 scope.go:117] "RemoveContainer" containerID="820b521e8723c89ca4b5ab79f599eaa92d05cdf27d3674e74608ff1f326ac44e" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.100860 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.132002 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.143390 4772 scope.go:117] "RemoveContainer" containerID="c1b5bc0d606b53b1f923962e6320bc50a7f3c41eaa28d4ea3325015f9d8f2470" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.146170 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 11:29:04 crc kubenswrapper[4772]: E1128 11:29:04.147148 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1b5bc0d606b53b1f923962e6320bc50a7f3c41eaa28d4ea3325015f9d8f2470\": container with ID starting with c1b5bc0d606b53b1f923962e6320bc50a7f3c41eaa28d4ea3325015f9d8f2470 not found: ID does not exist" containerID="c1b5bc0d606b53b1f923962e6320bc50a7f3c41eaa28d4ea3325015f9d8f2470" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.147197 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1b5bc0d606b53b1f923962e6320bc50a7f3c41eaa28d4ea3325015f9d8f2470"} err="failed to get container status \"c1b5bc0d606b53b1f923962e6320bc50a7f3c41eaa28d4ea3325015f9d8f2470\": rpc error: code = NotFound desc = could not find container \"c1b5bc0d606b53b1f923962e6320bc50a7f3c41eaa28d4ea3325015f9d8f2470\": container with ID starting with c1b5bc0d606b53b1f923962e6320bc50a7f3c41eaa28d4ea3325015f9d8f2470 not found: ID does not exist" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.147234 4772 scope.go:117] "RemoveContainer" containerID="820b521e8723c89ca4b5ab79f599eaa92d05cdf27d3674e74608ff1f326ac44e" Nov 28 11:29:04 crc kubenswrapper[4772]: E1128 11:29:04.148386 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"820b521e8723c89ca4b5ab79f599eaa92d05cdf27d3674e74608ff1f326ac44e\": container with ID starting with 820b521e8723c89ca4b5ab79f599eaa92d05cdf27d3674e74608ff1f326ac44e not found: ID does not exist" containerID="820b521e8723c89ca4b5ab79f599eaa92d05cdf27d3674e74608ff1f326ac44e" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.148421 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"820b521e8723c89ca4b5ab79f599eaa92d05cdf27d3674e74608ff1f326ac44e"} err="failed to get container status \"820b521e8723c89ca4b5ab79f599eaa92d05cdf27d3674e74608ff1f326ac44e\": rpc error: code = NotFound desc = could not find container \"820b521e8723c89ca4b5ab79f599eaa92d05cdf27d3674e74608ff1f326ac44e\": container with ID starting with 820b521e8723c89ca4b5ab79f599eaa92d05cdf27d3674e74608ff1f326ac44e not found: ID does not exist" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.166733 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.182208 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 11:29:04 crc kubenswrapper[4772]: E1128 11:29:04.183064 
4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b2f98f-d36f-4798-9903-1f75498cdb5b" containerName="setup-container" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.183080 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b2f98f-d36f-4798-9903-1f75498cdb5b" containerName="setup-container" Nov 28 11:29:04 crc kubenswrapper[4772]: E1128 11:29:04.183104 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d684e5b-88f5-4004-a176-22ae480daaa6" containerName="rabbitmq" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.183111 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d684e5b-88f5-4004-a176-22ae480daaa6" containerName="rabbitmq" Nov 28 11:29:04 crc kubenswrapper[4772]: E1128 11:29:04.183124 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b2f98f-d36f-4798-9903-1f75498cdb5b" containerName="rabbitmq" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.183134 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b2f98f-d36f-4798-9903-1f75498cdb5b" containerName="rabbitmq" Nov 28 11:29:04 crc kubenswrapper[4772]: E1128 11:29:04.183152 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d684e5b-88f5-4004-a176-22ae480daaa6" containerName="setup-container" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.183157 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d684e5b-88f5-4004-a176-22ae480daaa6" containerName="setup-container" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.183326 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b2f98f-d36f-4798-9903-1f75498cdb5b" containerName="rabbitmq" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.183375 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d684e5b-88f5-4004-a176-22ae480daaa6" containerName="rabbitmq" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.184662 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.188113 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-mcv57" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.188286 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.188445 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.188500 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.188581 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.188633 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.188782 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.204998 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.221285 4772 util.go:30] "No sandbox for pod can be found. 
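
The burst at 11:29:04.18–4.23 is the pods just deleted being replaced: DELETE and REMOVE for the old UIDs are followed within milliseconds by ADD/UPDATE for same-named pods under new UIDs (9e3b8854… for rabbitmq-cell1-server-0), which is why cpu_manager and memory_manager first purge stale state keyed by the old UIDs 52b2f98f… and 4d684e5b…. The per-pod churn can be read straight off the api-sourced SyncLoop entries; a sketch:

    import re
    from collections import defaultdict

    SYNCLOOP = re.compile(
        r'SyncLoop (ADD|UPDATE|DELETE|REMOVE)" source="api" pods=\["([^"]+)"\]')

    def api_churn(journal_text: str) -> dict:
        """Map pod name -> ordered list of api-sourced SyncLoop verbs."""
        churn = defaultdict(list)
        for verb, pod in SYNCLOOP.findall(journal_text):
            churn[pod].append(verb)
        return dict(churn)

For openstack/rabbitmq-cell1-server-0 this excerpt gives DELETE, REMOVE, ADD, then UPDATEs: the old instance is fully removed from the apiserver's view before the replacement is admitted.
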
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.225083 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.225440 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.225567 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.225732 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.225852 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.226188 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.226496 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-2lkn4" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.234141 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.246405 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e3b8854-8b5b-441d-97a7-12e48cffafb6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.246526 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e3b8854-8b5b-441d-97a7-12e48cffafb6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.246629 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e3b8854-8b5b-441d-97a7-12e48cffafb6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.246784 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e3b8854-8b5b-441d-97a7-12e48cffafb6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.246852 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e3b8854-8b5b-441d-97a7-12e48cffafb6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.246917 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e3b8854-8b5b-441d-97a7-12e48cffafb6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.246970 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e3b8854-8b5b-441d-97a7-12e48cffafb6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.247068 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.247139 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e3b8854-8b5b-441d-97a7-12e48cffafb6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.247171 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdnq2\" (UniqueName: \"kubernetes.io/projected/9e3b8854-8b5b-441d-97a7-12e48cffafb6-kube-api-access-bdnq2\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.247200 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e3b8854-8b5b-441d-97a7-12e48cffafb6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.268550 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.349928 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.350067 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.350135 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e3b8854-8b5b-441d-97a7-12e48cffafb6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.350184 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e3b8854-8b5b-441d-97a7-12e48cffafb6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.350311 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q98nr\" (UniqueName: \"kubernetes.io/projected/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-kube-api-access-q98nr\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.350486 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e3b8854-8b5b-441d-97a7-12e48cffafb6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.350517 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-config-data\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.351181 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.352016 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e3b8854-8b5b-441d-97a7-12e48cffafb6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.352545 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e3b8854-8b5b-441d-97a7-12e48cffafb6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.353015 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.353499 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.353607 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e3b8854-8b5b-441d-97a7-12e48cffafb6-config-data\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.353681 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdnq2\" (UniqueName: \"kubernetes.io/projected/9e3b8854-8b5b-441d-97a7-12e48cffafb6-kube-api-access-bdnq2\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.353722 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e3b8854-8b5b-441d-97a7-12e48cffafb6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.353759 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.353787 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e3b8854-8b5b-441d-97a7-12e48cffafb6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.353843 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e3b8854-8b5b-441d-97a7-12e48cffafb6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.353873 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e3b8854-8b5b-441d-97a7-12e48cffafb6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.353935 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.353977 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.354011 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.354057 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e3b8854-8b5b-441d-97a7-12e48cffafb6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.354110 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.353344 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.354618 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e3b8854-8b5b-441d-97a7-12e48cffafb6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.355423 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e3b8854-8b5b-441d-97a7-12e48cffafb6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.356405 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e3b8854-8b5b-441d-97a7-12e48cffafb6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.357147 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e3b8854-8b5b-441d-97a7-12e48cffafb6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.361897 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e3b8854-8b5b-441d-97a7-12e48cffafb6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.362815 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e3b8854-8b5b-441d-97a7-12e48cffafb6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.364485 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e3b8854-8b5b-441d-97a7-12e48cffafb6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.375852 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdnq2\" (UniqueName: \"kubernetes.io/projected/9e3b8854-8b5b-441d-97a7-12e48cffafb6-kube-api-access-bdnq2\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.393250 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e3b8854-8b5b-441d-97a7-12e48cffafb6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.456239 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.456349 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.456422 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.456454 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.456477 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.456521 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.456553 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: 
\"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.456588 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.456662 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q98nr\" (UniqueName: \"kubernetes.io/projected/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-kube-api-access-q98nr\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.456693 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-config-data\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.456735 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.459273 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.459369 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.461914 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.462416 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.464595 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.465208 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-config-data\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.467090 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.467845 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.468037 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.468583 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.482877 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q98nr\" (UniqueName: \"kubernetes.io/projected/1a8859cb-c89a-4d2c-ac6b-6abd31388e61-kube-api-access-q98nr\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.501922 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"1a8859cb-c89a-4d2c-ac6b-6abd31388e61\") " pod="openstack/rabbitmq-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.519149 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:04 crc kubenswrapper[4772]: I1128 11:29:04.569187 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.088227 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.580155 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-mz2ct"] Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.582642 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.585831 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.625466 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-mz2ct"] Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.695456 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.695509 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-dns-svc\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.695623 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-config\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.695658 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.695686 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.695708 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.695734 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d56kx\" (UniqueName: \"kubernetes.io/projected/e79df57a-2ba8-4921-ab33-6911ab2e0573-kube-api-access-d56kx\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.797481 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: 
\"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.797542 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.797579 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d56kx\" (UniqueName: \"kubernetes.io/projected/e79df57a-2ba8-4921-ab33-6911ab2e0573-kube-api-access-d56kx\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.797612 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.797638 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-dns-svc\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.797751 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-config\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.797786 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.798805 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.798829 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-dns-svc\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.798866 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 
11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.798998 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.799011 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-config\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.800092 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.818839 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d56kx\" (UniqueName: \"kubernetes.io/projected/e79df57a-2ba8-4921-ab33-6911ab2e0573-kube-api-access-d56kx\") pod \"dnsmasq-dns-d558885bc-mz2ct\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.893935 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 11:29:05 crc kubenswrapper[4772]: W1128 11:29:05.905461 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a8859cb_c89a_4d2c_ac6b_6abd31388e61.slice/crio-33c7d9ff285ce0a3540d3f55deff52af368e22c08f1eb7cabcad791770b380e3 WatchSource:0}: Error finding container 33c7d9ff285ce0a3540d3f55deff52af368e22c08f1eb7cabcad791770b380e3: Status 404 returned error can't find the container with id 33c7d9ff285ce0a3540d3f55deff52af368e22c08f1eb7cabcad791770b380e3 Nov 28 11:29:05 crc kubenswrapper[4772]: I1128 11:29:05.975952 4772 util.go:30] "No sandbox for pod can be found. 
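
The W1128 manager.go:1169 entry just above ("Failed to process watch event … Status 404") is cAdvisor racing container creation: the new crio-33c7d9ff… cgroup appears before CRI-O can report the container, and the watch retries harmlessly; an identical race follows at 11:29:06.643 for the dnsmasq pod. Grouping entries by klog severity and source file is a quick way to separate such known noise from real failures; a sketch under the same assumptions:

    import re
    from collections import Counter

    # klog header: severity letter, MMDD, time, pid, "source.go:line]"
    HEADER = re.compile(
        r'\b([IWE])\d{4} \d{2}:\d{2}:\d{2}\.\d{6}\s+\d+\s+([\w.]+\.go):\d+\]')

    def severity_by_source(journal_text: str) -> Counter:
        """Count klog entries per (severity, source file)."""
        return Counter(HEADER.findall(journal_text))

Here the I-level reconciler_common.go and operation_generator.go entries dominate, alongside the two W-level manager.go watch races and the E-level log.go and cpu_manager.go entries already accounted for by the pod replacement.
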
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:06 crc kubenswrapper[4772]: I1128 11:29:06.008941 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d684e5b-88f5-4004-a176-22ae480daaa6" path="/var/lib/kubelet/pods/4d684e5b-88f5-4004-a176-22ae480daaa6/volumes" Nov 28 11:29:06 crc kubenswrapper[4772]: I1128 11:29:06.010374 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52b2f98f-d36f-4798-9903-1f75498cdb5b" path="/var/lib/kubelet/pods/52b2f98f-d36f-4798-9903-1f75498cdb5b/volumes" Nov 28 11:29:06 crc kubenswrapper[4772]: I1128 11:29:06.112663 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1a8859cb-c89a-4d2c-ac6b-6abd31388e61","Type":"ContainerStarted","Data":"33c7d9ff285ce0a3540d3f55deff52af368e22c08f1eb7cabcad791770b380e3"} Nov 28 11:29:06 crc kubenswrapper[4772]: I1128 11:29:06.115212 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e3b8854-8b5b-441d-97a7-12e48cffafb6","Type":"ContainerStarted","Data":"afc8b56c9270c2bbcc3df3fe5f371256c495f05514f42c01e76063ed353b3842"} Nov 28 11:29:06 crc kubenswrapper[4772]: I1128 11:29:06.442879 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-mz2ct"] Nov 28 11:29:06 crc kubenswrapper[4772]: W1128 11:29:06.643995 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode79df57a_2ba8_4921_ab33_6911ab2e0573.slice/crio-eabda098e46df4f77741e365653bb2f6188cfd455f78202d1556f07afd9bcbe9 WatchSource:0}: Error finding container eabda098e46df4f77741e365653bb2f6188cfd455f78202d1556f07afd9bcbe9: Status 404 returned error can't find the container with id eabda098e46df4f77741e365653bb2f6188cfd455f78202d1556f07afd9bcbe9 Nov 28 11:29:07 crc kubenswrapper[4772]: I1128 11:29:07.136427 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e3b8854-8b5b-441d-97a7-12e48cffafb6","Type":"ContainerStarted","Data":"7fafde5303fd757470cd878782837126b550842e87d7292e2d8a2a1463c0e7e7"} Nov 28 11:29:07 crc kubenswrapper[4772]: I1128 11:29:07.138676 4772 generic.go:334] "Generic (PLEG): container finished" podID="e79df57a-2ba8-4921-ab33-6911ab2e0573" containerID="03c18c2898efd313c371880fb4262f0dc3147bcff1d96e78c7e43f22f7a2bc51" exitCode=0 Nov 28 11:29:07 crc kubenswrapper[4772]: I1128 11:29:07.138736 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-mz2ct" event={"ID":"e79df57a-2ba8-4921-ab33-6911ab2e0573","Type":"ContainerDied","Data":"03c18c2898efd313c371880fb4262f0dc3147bcff1d96e78c7e43f22f7a2bc51"} Nov 28 11:29:07 crc kubenswrapper[4772]: I1128 11:29:07.138772 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-mz2ct" event={"ID":"e79df57a-2ba8-4921-ab33-6911ab2e0573","Type":"ContainerStarted","Data":"eabda098e46df4f77741e365653bb2f6188cfd455f78202d1556f07afd9bcbe9"} Nov 28 11:29:08 crc kubenswrapper[4772]: I1128 11:29:08.150807 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-mz2ct" event={"ID":"e79df57a-2ba8-4921-ab33-6911ab2e0573","Type":"ContainerStarted","Data":"58e0e81725e6a5f2671925fb9ebe4e9d64b17b26669acd8026a05ef4ed7396d9"} Nov 28 11:29:08 crc kubenswrapper[4772]: I1128 11:29:08.151430 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:08 crc kubenswrapper[4772]: I1128 11:29:08.153408 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1a8859cb-c89a-4d2c-ac6b-6abd31388e61","Type":"ContainerStarted","Data":"214d95175dc9ab7386db1fd1995bbfc9d3905385441ff6989f70bbf7a3a03ef0"} Nov 28 11:29:08 crc kubenswrapper[4772]: I1128 11:29:08.180545 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-mz2ct" podStartSLOduration=3.18051929 podStartE2EDuration="3.18051929s" podCreationTimestamp="2025-11-28 11:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:29:08.172720158 +0000 UTC m=+1346.495963385" watchObservedRunningTime="2025-11-28 11:29:08.18051929 +0000 UTC m=+1346.503762517" Nov 28 11:29:15 crc kubenswrapper[4772]: I1128 11:29:15.978633 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.051260 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-sqvg4"] Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.051594 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" podUID="16d9e80f-25f4-4cc3-b8f7-442760eff45c" containerName="dnsmasq-dns" containerID="cri-o://fb5d9a0b7691beb8cfdb4d3b578fa211b68d76395094beff985b11d9b4b6a81c" gracePeriod=10 Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.181492 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" podUID="16d9e80f-25f4-4cc3-b8f7-442760eff45c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.197:5353: connect: connection refused" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.238276 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-shv6k"] Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.240694 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.248576 4772 generic.go:334] "Generic (PLEG): container finished" podID="16d9e80f-25f4-4cc3-b8f7-442760eff45c" containerID="fb5d9a0b7691beb8cfdb4d3b578fa211b68d76395094beff985b11d9b4b6a81c" exitCode=0 Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.248642 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" event={"ID":"16d9e80f-25f4-4cc3-b8f7-442760eff45c","Type":"ContainerDied","Data":"fb5d9a0b7691beb8cfdb4d3b578fa211b68d76395094beff985b11d9b4b6a81c"} Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.264584 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-shv6k"] Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.331209 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.331269 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.331290 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.331309 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.331327 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szgkp\" (UniqueName: \"kubernetes.io/projected/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-kube-api-access-szgkp\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.331594 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.332014 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-config\") pod 
\"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.434308 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.434407 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.434445 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.434473 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.434496 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szgkp\" (UniqueName: \"kubernetes.io/projected/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-kube-api-access-szgkp\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.434525 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.434632 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-config\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.435694 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.435778 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: 
\"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.436492 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-config\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.436532 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.437254 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.437741 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.481300 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szgkp\" (UniqueName: \"kubernetes.io/projected/03f3a9a0-ccb5-41d6-8ba8-419fb9775213-kube-api-access-szgkp\") pod \"dnsmasq-dns-78c64bc9c5-shv6k\" (UID: \"03f3a9a0-ccb5-41d6-8ba8-419fb9775213\") " pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.577421 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.710986 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.844826 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-dns-swift-storage-0\") pod \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.844942 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-ovsdbserver-sb\") pod \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.845039 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqttr\" (UniqueName: \"kubernetes.io/projected/16d9e80f-25f4-4cc3-b8f7-442760eff45c-kube-api-access-cqttr\") pod \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.845229 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-ovsdbserver-nb\") pod \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.845317 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-config\") pod \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.845346 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-dns-svc\") pod \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\" (UID: \"16d9e80f-25f4-4cc3-b8f7-442760eff45c\") " Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.858073 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16d9e80f-25f4-4cc3-b8f7-442760eff45c-kube-api-access-cqttr" (OuterVolumeSpecName: "kube-api-access-cqttr") pod "16d9e80f-25f4-4cc3-b8f7-442760eff45c" (UID: "16d9e80f-25f4-4cc3-b8f7-442760eff45c"). InnerVolumeSpecName "kube-api-access-cqttr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.896730 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "16d9e80f-25f4-4cc3-b8f7-442760eff45c" (UID: "16d9e80f-25f4-4cc3-b8f7-442760eff45c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.899043 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "16d9e80f-25f4-4cc3-b8f7-442760eff45c" (UID: "16d9e80f-25f4-4cc3-b8f7-442760eff45c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.901508 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "16d9e80f-25f4-4cc3-b8f7-442760eff45c" (UID: "16d9e80f-25f4-4cc3-b8f7-442760eff45c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.913224 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-config" (OuterVolumeSpecName: "config") pod "16d9e80f-25f4-4cc3-b8f7-442760eff45c" (UID: "16d9e80f-25f4-4cc3-b8f7-442760eff45c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.914559 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16d9e80f-25f4-4cc3-b8f7-442760eff45c" (UID: "16d9e80f-25f4-4cc3-b8f7-442760eff45c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.950552 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.950590 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.950601 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.950609 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.950622 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16d9e80f-25f4-4cc3-b8f7-442760eff45c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:16 crc kubenswrapper[4772]: I1128 11:29:16.950630 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqttr\" (UniqueName: \"kubernetes.io/projected/16d9e80f-25f4-4cc3-b8f7-442760eff45c-kube-api-access-cqttr\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:17 crc kubenswrapper[4772]: I1128 11:29:17.064875 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-shv6k"] Nov 28 11:29:17 crc kubenswrapper[4772]: I1128 11:29:17.261951 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" event={"ID":"16d9e80f-25f4-4cc3-b8f7-442760eff45c","Type":"ContainerDied","Data":"50db7dc54298c84c7c127a4c0649fcfe83c030e3fd15d5c4f6935544d36a830f"} Nov 28 11:29:17 crc kubenswrapper[4772]: I1128 11:29:17.262450 4772 scope.go:117] "RemoveContainer" 
containerID="fb5d9a0b7691beb8cfdb4d3b578fa211b68d76395094beff985b11d9b4b6a81c" Nov 28 11:29:17 crc kubenswrapper[4772]: I1128 11:29:17.262042 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-sqvg4" Nov 28 11:29:17 crc kubenswrapper[4772]: I1128 11:29:17.264628 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" event={"ID":"03f3a9a0-ccb5-41d6-8ba8-419fb9775213","Type":"ContainerStarted","Data":"9a59abf97142ea55182e1ffde2349fbdfd76e9e01b1c59d526d90098627a0f01"} Nov 28 11:29:17 crc kubenswrapper[4772]: I1128 11:29:17.297727 4772 scope.go:117] "RemoveContainer" containerID="14b44dbe71b78f8f96e38a2d77be5e7be6d70989b2bedea7a90be1fb42e361f8" Nov 28 11:29:17 crc kubenswrapper[4772]: I1128 11:29:17.300783 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-sqvg4"] Nov 28 11:29:17 crc kubenswrapper[4772]: I1128 11:29:17.310460 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-sqvg4"] Nov 28 11:29:18 crc kubenswrapper[4772]: I1128 11:29:18.010200 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16d9e80f-25f4-4cc3-b8f7-442760eff45c" path="/var/lib/kubelet/pods/16d9e80f-25f4-4cc3-b8f7-442760eff45c/volumes" Nov 28 11:29:18 crc kubenswrapper[4772]: I1128 11:29:18.277797 4772 generic.go:334] "Generic (PLEG): container finished" podID="03f3a9a0-ccb5-41d6-8ba8-419fb9775213" containerID="c472f80061f07fe97a8096579fd8735e031457c3ae1945067d455354a74366f7" exitCode=0 Nov 28 11:29:18 crc kubenswrapper[4772]: I1128 11:29:18.277899 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" event={"ID":"03f3a9a0-ccb5-41d6-8ba8-419fb9775213","Type":"ContainerDied","Data":"c472f80061f07fe97a8096579fd8735e031457c3ae1945067d455354a74366f7"} Nov 28 11:29:19 crc kubenswrapper[4772]: I1128 11:29:19.301740 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" event={"ID":"03f3a9a0-ccb5-41d6-8ba8-419fb9775213","Type":"ContainerStarted","Data":"c6bb06b9cb65475aac9eea3938b9ff2bac7cb9145499e75879c035f6caba1228"} Nov 28 11:29:19 crc kubenswrapper[4772]: I1128 11:29:19.303036 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:19 crc kubenswrapper[4772]: I1128 11:29:19.345055 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" podStartSLOduration=3.345019174 podStartE2EDuration="3.345019174s" podCreationTimestamp="2025-11-28 11:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:29:19.334095012 +0000 UTC m=+1357.657338239" watchObservedRunningTime="2025-11-28 11:29:19.345019174 +0000 UTC m=+1357.668262441" Nov 28 11:29:26 crc kubenswrapper[4772]: I1128 11:29:26.579626 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-shv6k" Nov 28 11:29:26 crc kubenswrapper[4772]: I1128 11:29:26.672754 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-mz2ct"] Nov 28 11:29:26 crc kubenswrapper[4772]: I1128 11:29:26.673099 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-mz2ct" podUID="e79df57a-2ba8-4921-ab33-6911ab2e0573" 
containerName="dnsmasq-dns" containerID="cri-o://58e0e81725e6a5f2671925fb9ebe4e9d64b17b26669acd8026a05ef4ed7396d9" gracePeriod=10 Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.217871 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.295074 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-dns-svc\") pod \"e79df57a-2ba8-4921-ab33-6911ab2e0573\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.295174 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-openstack-edpm-ipam\") pod \"e79df57a-2ba8-4921-ab33-6911ab2e0573\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.295210 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-dns-swift-storage-0\") pod \"e79df57a-2ba8-4921-ab33-6911ab2e0573\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.295231 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-config\") pod \"e79df57a-2ba8-4921-ab33-6911ab2e0573\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.295335 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d56kx\" (UniqueName: \"kubernetes.io/projected/e79df57a-2ba8-4921-ab33-6911ab2e0573-kube-api-access-d56kx\") pod \"e79df57a-2ba8-4921-ab33-6911ab2e0573\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.295354 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-ovsdbserver-sb\") pod \"e79df57a-2ba8-4921-ab33-6911ab2e0573\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.295427 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-ovsdbserver-nb\") pod \"e79df57a-2ba8-4921-ab33-6911ab2e0573\" (UID: \"e79df57a-2ba8-4921-ab33-6911ab2e0573\") " Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.308917 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79df57a-2ba8-4921-ab33-6911ab2e0573-kube-api-access-d56kx" (OuterVolumeSpecName: "kube-api-access-d56kx") pod "e79df57a-2ba8-4921-ab33-6911ab2e0573" (UID: "e79df57a-2ba8-4921-ab33-6911ab2e0573"). InnerVolumeSpecName "kube-api-access-d56kx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.351959 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-config" (OuterVolumeSpecName: "config") pod "e79df57a-2ba8-4921-ab33-6911ab2e0573" (UID: "e79df57a-2ba8-4921-ab33-6911ab2e0573"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.352012 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e79df57a-2ba8-4921-ab33-6911ab2e0573" (UID: "e79df57a-2ba8-4921-ab33-6911ab2e0573"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.356562 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e79df57a-2ba8-4921-ab33-6911ab2e0573" (UID: "e79df57a-2ba8-4921-ab33-6911ab2e0573"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.358811 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "e79df57a-2ba8-4921-ab33-6911ab2e0573" (UID: "e79df57a-2ba8-4921-ab33-6911ab2e0573"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.360146 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e79df57a-2ba8-4921-ab33-6911ab2e0573" (UID: "e79df57a-2ba8-4921-ab33-6911ab2e0573"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.368344 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e79df57a-2ba8-4921-ab33-6911ab2e0573" (UID: "e79df57a-2ba8-4921-ab33-6911ab2e0573"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.398888 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.398925 4772 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.398936 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.398949 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.398959 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d56kx\" (UniqueName: \"kubernetes.io/projected/e79df57a-2ba8-4921-ab33-6911ab2e0573-kube-api-access-d56kx\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.398970 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.398983 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e79df57a-2ba8-4921-ab33-6911ab2e0573-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.409191 4772 generic.go:334] "Generic (PLEG): container finished" podID="e79df57a-2ba8-4921-ab33-6911ab2e0573" containerID="58e0e81725e6a5f2671925fb9ebe4e9d64b17b26669acd8026a05ef4ed7396d9" exitCode=0 Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.409283 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-mz2ct" event={"ID":"e79df57a-2ba8-4921-ab33-6911ab2e0573","Type":"ContainerDied","Data":"58e0e81725e6a5f2671925fb9ebe4e9d64b17b26669acd8026a05ef4ed7396d9"} Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.409307 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-mz2ct" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.409546 4772 scope.go:117] "RemoveContainer" containerID="58e0e81725e6a5f2671925fb9ebe4e9d64b17b26669acd8026a05ef4ed7396d9" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.409435 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-mz2ct" event={"ID":"e79df57a-2ba8-4921-ab33-6911ab2e0573","Type":"ContainerDied","Data":"eabda098e46df4f77741e365653bb2f6188cfd455f78202d1556f07afd9bcbe9"} Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.467582 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-mz2ct"] Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.473513 4772 scope.go:117] "RemoveContainer" containerID="03c18c2898efd313c371880fb4262f0dc3147bcff1d96e78c7e43f22f7a2bc51" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.477921 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-mz2ct"] Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.514521 4772 scope.go:117] "RemoveContainer" containerID="58e0e81725e6a5f2671925fb9ebe4e9d64b17b26669acd8026a05ef4ed7396d9" Nov 28 11:29:27 crc kubenswrapper[4772]: E1128 11:29:27.515164 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58e0e81725e6a5f2671925fb9ebe4e9d64b17b26669acd8026a05ef4ed7396d9\": container with ID starting with 58e0e81725e6a5f2671925fb9ebe4e9d64b17b26669acd8026a05ef4ed7396d9 not found: ID does not exist" containerID="58e0e81725e6a5f2671925fb9ebe4e9d64b17b26669acd8026a05ef4ed7396d9" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.515208 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58e0e81725e6a5f2671925fb9ebe4e9d64b17b26669acd8026a05ef4ed7396d9"} err="failed to get container status \"58e0e81725e6a5f2671925fb9ebe4e9d64b17b26669acd8026a05ef4ed7396d9\": rpc error: code = NotFound desc = could not find container \"58e0e81725e6a5f2671925fb9ebe4e9d64b17b26669acd8026a05ef4ed7396d9\": container with ID starting with 58e0e81725e6a5f2671925fb9ebe4e9d64b17b26669acd8026a05ef4ed7396d9 not found: ID does not exist" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.515238 4772 scope.go:117] "RemoveContainer" containerID="03c18c2898efd313c371880fb4262f0dc3147bcff1d96e78c7e43f22f7a2bc51" Nov 28 11:29:27 crc kubenswrapper[4772]: E1128 11:29:27.515968 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03c18c2898efd313c371880fb4262f0dc3147bcff1d96e78c7e43f22f7a2bc51\": container with ID starting with 03c18c2898efd313c371880fb4262f0dc3147bcff1d96e78c7e43f22f7a2bc51 not found: ID does not exist" containerID="03c18c2898efd313c371880fb4262f0dc3147bcff1d96e78c7e43f22f7a2bc51" Nov 28 11:29:27 crc kubenswrapper[4772]: I1128 11:29:27.515993 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03c18c2898efd313c371880fb4262f0dc3147bcff1d96e78c7e43f22f7a2bc51"} err="failed to get container status \"03c18c2898efd313c371880fb4262f0dc3147bcff1d96e78c7e43f22f7a2bc51\": rpc error: code = NotFound desc = could not find container \"03c18c2898efd313c371880fb4262f0dc3147bcff1d96e78c7e43f22f7a2bc51\": container with ID starting with 03c18c2898efd313c371880fb4262f0dc3147bcff1d96e78c7e43f22f7a2bc51 not found: ID does not exist" Nov 28 11:29:28 
crc kubenswrapper[4772]: I1128 11:29:28.005309 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e79df57a-2ba8-4921-ab33-6911ab2e0573" path="/var/lib/kubelet/pods/e79df57a-2ba8-4921-ab33-6911ab2e0573/volumes" Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.568134 4772 generic.go:334] "Generic (PLEG): container finished" podID="9e3b8854-8b5b-441d-97a7-12e48cffafb6" containerID="7fafde5303fd757470cd878782837126b550842e87d7292e2d8a2a1463c0e7e7" exitCode=0 Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.568226 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e3b8854-8b5b-441d-97a7-12e48cffafb6","Type":"ContainerDied","Data":"7fafde5303fd757470cd878782837126b550842e87d7292e2d8a2a1463c0e7e7"} Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.784820 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw"] Nov 28 11:29:39 crc kubenswrapper[4772]: E1128 11:29:39.785900 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79df57a-2ba8-4921-ab33-6911ab2e0573" containerName="dnsmasq-dns" Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.785918 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79df57a-2ba8-4921-ab33-6911ab2e0573" containerName="dnsmasq-dns" Nov 28 11:29:39 crc kubenswrapper[4772]: E1128 11:29:39.785978 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d9e80f-25f4-4cc3-b8f7-442760eff45c" containerName="dnsmasq-dns" Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.786107 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d9e80f-25f4-4cc3-b8f7-442760eff45c" containerName="dnsmasq-dns" Nov 28 11:29:39 crc kubenswrapper[4772]: E1128 11:29:39.786140 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79df57a-2ba8-4921-ab33-6911ab2e0573" containerName="init" Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.786150 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79df57a-2ba8-4921-ab33-6911ab2e0573" containerName="init" Nov 28 11:29:39 crc kubenswrapper[4772]: E1128 11:29:39.786165 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d9e80f-25f4-4cc3-b8f7-442760eff45c" containerName="init" Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.786172 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d9e80f-25f4-4cc3-b8f7-442760eff45c" containerName="init" Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.786573 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e79df57a-2ba8-4921-ab33-6911ab2e0573" containerName="dnsmasq-dns" Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.786603 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="16d9e80f-25f4-4cc3-b8f7-442760eff45c" containerName="dnsmasq-dns" Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.787729 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.963158 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.963185 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.963470 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.964031 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:29:39 crc kubenswrapper[4772]: I1128 11:29:39.984114 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw"] Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.070793 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.070864 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.070927 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.070965 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcclr\" (UniqueName: \"kubernetes.io/projected/8a8788a2-2374-4b48-b5c6-f6ab77a21711-kube-api-access-rcclr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.176905 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.177339 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcclr\" (UniqueName: 
\"kubernetes.io/projected/8a8788a2-2374-4b48-b5c6-f6ab77a21711-kube-api-access-rcclr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.177706 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.177818 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.184951 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.196824 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.197422 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.198253 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcclr\" (UniqueName: \"kubernetes.io/projected/8a8788a2-2374-4b48-b5c6-f6ab77a21711-kube-api-access-rcclr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.290583 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.606385 4772 generic.go:334] "Generic (PLEG): container finished" podID="1a8859cb-c89a-4d2c-ac6b-6abd31388e61" containerID="214d95175dc9ab7386db1fd1995bbfc9d3905385441ff6989f70bbf7a3a03ef0" exitCode=0 Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.606490 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1a8859cb-c89a-4d2c-ac6b-6abd31388e61","Type":"ContainerDied","Data":"214d95175dc9ab7386db1fd1995bbfc9d3905385441ff6989f70bbf7a3a03ef0"} Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.615417 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e3b8854-8b5b-441d-97a7-12e48cffafb6","Type":"ContainerStarted","Data":"90fe95c87f0edb857e55b12a7449f712a106d9beeb8029fd54c659c770b96433"} Nov 28 11:29:40 crc kubenswrapper[4772]: I1128 11:29:40.616243 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:41 crc kubenswrapper[4772]: I1128 11:29:41.064707 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.064680784 podStartE2EDuration="37.064680784s" podCreationTimestamp="2025-11-28 11:29:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:29:40.689659189 +0000 UTC m=+1379.012902466" watchObservedRunningTime="2025-11-28 11:29:41.064680784 +0000 UTC m=+1379.387924011" Nov 28 11:29:41 crc kubenswrapper[4772]: I1128 11:29:41.070301 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw"] Nov 28 11:29:41 crc kubenswrapper[4772]: I1128 11:29:41.644013 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1a8859cb-c89a-4d2c-ac6b-6abd31388e61","Type":"ContainerStarted","Data":"2b55828183b0b7497c1519abce87f2b1b2df51ec2e21beac946c2a8949695ef8"} Nov 28 11:29:41 crc kubenswrapper[4772]: I1128 11:29:41.646202 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 11:29:41 crc kubenswrapper[4772]: I1128 11:29:41.654275 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" event={"ID":"8a8788a2-2374-4b48-b5c6-f6ab77a21711","Type":"ContainerStarted","Data":"f891493fb0b0908ba11a7772cd0c648396f7fe660b41534b1f2f504483d30744"} Nov 28 11:29:41 crc kubenswrapper[4772]: I1128 11:29:41.678940 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.678917496 podStartE2EDuration="37.678917496s" podCreationTimestamp="2025-11-28 11:29:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:29:41.671794512 +0000 UTC m=+1379.995037729" watchObservedRunningTime="2025-11-28 11:29:41.678917496 +0000 UTC m=+1380.002160723" Nov 28 11:29:43 crc kubenswrapper[4772]: I1128 11:29:43.490845 4772 scope.go:117] "RemoveContainer" containerID="ee956eabea4b4338ff07057f9b20ad599a52188d715ed812aea06dde00ec9f7e" Nov 28 11:29:51 crc kubenswrapper[4772]: I1128 11:29:51.092391 4772 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:29:51 crc kubenswrapper[4772]: I1128 11:29:51.813040 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" event={"ID":"8a8788a2-2374-4b48-b5c6-f6ab77a21711","Type":"ContainerStarted","Data":"d0fb3e3a97685a3a85c10d8fc8af108f8623f0af00aec8e71adaddcbcb96b80e"} Nov 28 11:29:51 crc kubenswrapper[4772]: I1128 11:29:51.844091 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" podStartSLOduration=2.837682359 podStartE2EDuration="12.844061929s" podCreationTimestamp="2025-11-28 11:29:39 +0000 UTC" firstStartedPulling="2025-11-28 11:29:41.082658613 +0000 UTC m=+1379.405901840" lastFinishedPulling="2025-11-28 11:29:51.089038183 +0000 UTC m=+1389.412281410" observedRunningTime="2025-11-28 11:29:51.841022837 +0000 UTC m=+1390.164266104" watchObservedRunningTime="2025-11-28 11:29:51.844061929 +0000 UTC m=+1390.167305156" Nov 28 11:29:54 crc kubenswrapper[4772]: I1128 11:29:54.522742 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 28 11:29:54 crc kubenswrapper[4772]: I1128 11:29:54.572650 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.165547 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w"] Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.168916 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.171210 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.171248 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.176796 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w"] Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.277695 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-config-volume\") pod \"collect-profiles-29405490-lxb9w\" (UID: \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.277883 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-secret-volume\") pod \"collect-profiles-29405490-lxb9w\" (UID: \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.277924 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t6rw\" (UniqueName: 
\"kubernetes.io/projected/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-kube-api-access-4t6rw\") pod \"collect-profiles-29405490-lxb9w\" (UID: \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.380051 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-secret-volume\") pod \"collect-profiles-29405490-lxb9w\" (UID: \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.380120 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t6rw\" (UniqueName: \"kubernetes.io/projected/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-kube-api-access-4t6rw\") pod \"collect-profiles-29405490-lxb9w\" (UID: \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.380210 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-config-volume\") pod \"collect-profiles-29405490-lxb9w\" (UID: \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.381236 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-config-volume\") pod \"collect-profiles-29405490-lxb9w\" (UID: \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.399386 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-secret-volume\") pod \"collect-profiles-29405490-lxb9w\" (UID: \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.400323 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t6rw\" (UniqueName: \"kubernetes.io/projected/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-kube-api-access-4t6rw\") pod \"collect-profiles-29405490-lxb9w\" (UID: \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" Nov 28 11:30:00 crc kubenswrapper[4772]: I1128 11:30:00.509940 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" Nov 28 11:30:01 crc kubenswrapper[4772]: I1128 11:30:01.007302 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w"] Nov 28 11:30:01 crc kubenswrapper[4772]: I1128 11:30:01.931260 4772 generic.go:334] "Generic (PLEG): container finished" podID="9dcd8659-535c-4cd7-9a08-f7a67afbadcd" containerID="b854aca9587ea5ff300f348e0a85d24629920c328b3b6510fc8e8cd42323cadc" exitCode=0 Nov 28 11:30:01 crc kubenswrapper[4772]: I1128 11:30:01.931385 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" event={"ID":"9dcd8659-535c-4cd7-9a08-f7a67afbadcd","Type":"ContainerDied","Data":"b854aca9587ea5ff300f348e0a85d24629920c328b3b6510fc8e8cd42323cadc"} Nov 28 11:30:01 crc kubenswrapper[4772]: I1128 11:30:01.931818 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" event={"ID":"9dcd8659-535c-4cd7-9a08-f7a67afbadcd","Type":"ContainerStarted","Data":"e3066481c5a5bd0c59f2f9dc13e999e13f9a3019325b97128241e632af839971"} Nov 28 11:30:02 crc kubenswrapper[4772]: I1128 11:30:02.946685 4772 generic.go:334] "Generic (PLEG): container finished" podID="8a8788a2-2374-4b48-b5c6-f6ab77a21711" containerID="d0fb3e3a97685a3a85c10d8fc8af108f8623f0af00aec8e71adaddcbcb96b80e" exitCode=0 Nov 28 11:30:02 crc kubenswrapper[4772]: I1128 11:30:02.947612 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" event={"ID":"8a8788a2-2374-4b48-b5c6-f6ab77a21711","Type":"ContainerDied","Data":"d0fb3e3a97685a3a85c10d8fc8af108f8623f0af00aec8e71adaddcbcb96b80e"} Nov 28 11:30:03 crc kubenswrapper[4772]: I1128 11:30:03.514555 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" Nov 28 11:30:03 crc kubenswrapper[4772]: I1128 11:30:03.676322 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-config-volume\") pod \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\" (UID: \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\") " Nov 28 11:30:03 crc kubenswrapper[4772]: I1128 11:30:03.676452 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-secret-volume\") pod \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\" (UID: \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\") " Nov 28 11:30:03 crc kubenswrapper[4772]: I1128 11:30:03.676571 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t6rw\" (UniqueName: \"kubernetes.io/projected/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-kube-api-access-4t6rw\") pod \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\" (UID: \"9dcd8659-535c-4cd7-9a08-f7a67afbadcd\") " Nov 28 11:30:03 crc kubenswrapper[4772]: I1128 11:30:03.677521 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-config-volume" (OuterVolumeSpecName: "config-volume") pod "9dcd8659-535c-4cd7-9a08-f7a67afbadcd" (UID: "9dcd8659-535c-4cd7-9a08-f7a67afbadcd"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:30:03 crc kubenswrapper[4772]: I1128 11:30:03.684640 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9dcd8659-535c-4cd7-9a08-f7a67afbadcd" (UID: "9dcd8659-535c-4cd7-9a08-f7a67afbadcd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:30:03 crc kubenswrapper[4772]: I1128 11:30:03.691661 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-kube-api-access-4t6rw" (OuterVolumeSpecName: "kube-api-access-4t6rw") pod "9dcd8659-535c-4cd7-9a08-f7a67afbadcd" (UID: "9dcd8659-535c-4cd7-9a08-f7a67afbadcd"). InnerVolumeSpecName "kube-api-access-4t6rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:30:03 crc kubenswrapper[4772]: I1128 11:30:03.778772 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 11:30:03 crc kubenswrapper[4772]: I1128 11:30:03.778819 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t6rw\" (UniqueName: \"kubernetes.io/projected/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-kube-api-access-4t6rw\") on node \"crc\" DevicePath \"\"" Nov 28 11:30:03 crc kubenswrapper[4772]: I1128 11:30:03.778828 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dcd8659-535c-4cd7-9a08-f7a67afbadcd-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 11:30:03 crc kubenswrapper[4772]: I1128 11:30:03.963790 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" event={"ID":"9dcd8659-535c-4cd7-9a08-f7a67afbadcd","Type":"ContainerDied","Data":"e3066481c5a5bd0c59f2f9dc13e999e13f9a3019325b97128241e632af839971"} Nov 28 11:30:03 crc kubenswrapper[4772]: I1128 11:30:03.963880 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3066481c5a5bd0c59f2f9dc13e999e13f9a3019325b97128241e632af839971" Nov 28 11:30:03 crc kubenswrapper[4772]: I1128 11:30:03.963821 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w" Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.481683 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.596247 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-repo-setup-combined-ca-bundle\") pod \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.596329 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-ssh-key\") pod \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.596455 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcclr\" (UniqueName: \"kubernetes.io/projected/8a8788a2-2374-4b48-b5c6-f6ab77a21711-kube-api-access-rcclr\") pod \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.596704 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-inventory\") pod \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\" (UID: \"8a8788a2-2374-4b48-b5c6-f6ab77a21711\") " Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.605737 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8a8788a2-2374-4b48-b5c6-f6ab77a21711" (UID: "8a8788a2-2374-4b48-b5c6-f6ab77a21711"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.605751 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a8788a2-2374-4b48-b5c6-f6ab77a21711-kube-api-access-rcclr" (OuterVolumeSpecName: "kube-api-access-rcclr") pod "8a8788a2-2374-4b48-b5c6-f6ab77a21711" (UID: "8a8788a2-2374-4b48-b5c6-f6ab77a21711"). InnerVolumeSpecName "kube-api-access-rcclr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.632948 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-inventory" (OuterVolumeSpecName: "inventory") pod "8a8788a2-2374-4b48-b5c6-f6ab77a21711" (UID: "8a8788a2-2374-4b48-b5c6-f6ab77a21711"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.639503 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8a8788a2-2374-4b48-b5c6-f6ab77a21711" (UID: "8a8788a2-2374-4b48-b5c6-f6ab77a21711"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.699369 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.699419 4772 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.699437 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a8788a2-2374-4b48-b5c6-f6ab77a21711-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.699450 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcclr\" (UniqueName: \"kubernetes.io/projected/8a8788a2-2374-4b48-b5c6-f6ab77a21711-kube-api-access-rcclr\") on node \"crc\" DevicePath \"\"" Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.978904 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" event={"ID":"8a8788a2-2374-4b48-b5c6-f6ab77a21711","Type":"ContainerDied","Data":"f891493fb0b0908ba11a7772cd0c648396f7fe660b41534b1f2f504483d30744"} Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.978962 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f891493fb0b0908ba11a7772cd0c648396f7fe660b41534b1f2f504483d30744" Nov 28 11:30:04 crc kubenswrapper[4772]: I1128 11:30:04.978976 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.084506 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g"] Nov 28 11:30:05 crc kubenswrapper[4772]: E1128 11:30:05.085039 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a8788a2-2374-4b48-b5c6-f6ab77a21711" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.085059 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a8788a2-2374-4b48-b5c6-f6ab77a21711" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 11:30:05 crc kubenswrapper[4772]: E1128 11:30:05.085074 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dcd8659-535c-4cd7-9a08-f7a67afbadcd" containerName="collect-profiles" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.085082 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dcd8659-535c-4cd7-9a08-f7a67afbadcd" containerName="collect-profiles" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.085280 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dcd8659-535c-4cd7-9a08-f7a67afbadcd" containerName="collect-profiles" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.085312 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a8788a2-2374-4b48-b5c6-f6ab77a21711" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.087601 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.091051 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.098917 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.099325 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.099618 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.115085 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g"] Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.223819 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62194471-8bd2-46f1-9891-e2bfe7cf8d67-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dkl6g\" (UID: \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.223926 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sq8c\" (UniqueName: \"kubernetes.io/projected/62194471-8bd2-46f1-9891-e2bfe7cf8d67-kube-api-access-4sq8c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dkl6g\" (UID: \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.224037 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62194471-8bd2-46f1-9891-e2bfe7cf8d67-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dkl6g\" (UID: \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.326523 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62194471-8bd2-46f1-9891-e2bfe7cf8d67-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dkl6g\" (UID: \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.326651 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62194471-8bd2-46f1-9891-e2bfe7cf8d67-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dkl6g\" (UID: \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.326716 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sq8c\" (UniqueName: \"kubernetes.io/projected/62194471-8bd2-46f1-9891-e2bfe7cf8d67-kube-api-access-4sq8c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dkl6g\" (UID: \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.333108 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62194471-8bd2-46f1-9891-e2bfe7cf8d67-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dkl6g\" (UID: \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.333751 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62194471-8bd2-46f1-9891-e2bfe7cf8d67-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dkl6g\" (UID: \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.345894 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sq8c\" (UniqueName: \"kubernetes.io/projected/62194471-8bd2-46f1-9891-e2bfe7cf8d67-kube-api-access-4sq8c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dkl6g\" (UID: \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" Nov 28 11:30:05 crc kubenswrapper[4772]: I1128 11:30:05.414996 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" Nov 28 11:30:06 crc kubenswrapper[4772]: I1128 11:30:06.054825 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g"] Nov 28 11:30:06 crc kubenswrapper[4772]: W1128 11:30:06.058443 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62194471_8bd2_46f1_9891_e2bfe7cf8d67.slice/crio-1d26bfd7627d49cd8f1720fb8ac52a9b9275764712a6c38ab88a0d78751dac0a WatchSource:0}: Error finding container 1d26bfd7627d49cd8f1720fb8ac52a9b9275764712a6c38ab88a0d78751dac0a: Status 404 returned error can't find the container with id 1d26bfd7627d49cd8f1720fb8ac52a9b9275764712a6c38ab88a0d78751dac0a Nov 28 11:30:07 crc kubenswrapper[4772]: I1128 11:30:07.010027 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" event={"ID":"62194471-8bd2-46f1-9891-e2bfe7cf8d67","Type":"ContainerStarted","Data":"2def8f152009be04903c3db002f67a5def2778a6bf7373674f83fca6e6e24a27"} Nov 28 11:30:07 crc kubenswrapper[4772]: I1128 11:30:07.010384 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" event={"ID":"62194471-8bd2-46f1-9891-e2bfe7cf8d67","Type":"ContainerStarted","Data":"1d26bfd7627d49cd8f1720fb8ac52a9b9275764712a6c38ab88a0d78751dac0a"} Nov 28 11:30:07 crc kubenswrapper[4772]: I1128 11:30:07.042769 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" podStartSLOduration=1.540925502 podStartE2EDuration="2.042743541s" podCreationTimestamp="2025-11-28 11:30:05 +0000 UTC" firstStartedPulling="2025-11-28 11:30:06.062715845 +0000 UTC m=+1404.385959062" lastFinishedPulling="2025-11-28 11:30:06.564533854 +0000 UTC m=+1404.887777101" observedRunningTime="2025-11-28 11:30:07.03901526 +0000 UTC m=+1405.362258477" watchObservedRunningTime="2025-11-28 11:30:07.042743541 +0000 UTC 
m=+1405.365986768" Nov 28 11:30:10 crc kubenswrapper[4772]: I1128 11:30:10.049603 4772 generic.go:334] "Generic (PLEG): container finished" podID="62194471-8bd2-46f1-9891-e2bfe7cf8d67" containerID="2def8f152009be04903c3db002f67a5def2778a6bf7373674f83fca6e6e24a27" exitCode=0 Nov 28 11:30:10 crc kubenswrapper[4772]: I1128 11:30:10.049700 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" event={"ID":"62194471-8bd2-46f1-9891-e2bfe7cf8d67","Type":"ContainerDied","Data":"2def8f152009be04903c3db002f67a5def2778a6bf7373674f83fca6e6e24a27"} Nov 28 11:30:11 crc kubenswrapper[4772]: I1128 11:30:11.589890 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" Nov 28 11:30:11 crc kubenswrapper[4772]: I1128 11:30:11.686197 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62194471-8bd2-46f1-9891-e2bfe7cf8d67-inventory\") pod \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\" (UID: \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\") " Nov 28 11:30:11 crc kubenswrapper[4772]: I1128 11:30:11.686534 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62194471-8bd2-46f1-9891-e2bfe7cf8d67-ssh-key\") pod \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\" (UID: \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\") " Nov 28 11:30:11 crc kubenswrapper[4772]: I1128 11:30:11.686718 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sq8c\" (UniqueName: \"kubernetes.io/projected/62194471-8bd2-46f1-9891-e2bfe7cf8d67-kube-api-access-4sq8c\") pod \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\" (UID: \"62194471-8bd2-46f1-9891-e2bfe7cf8d67\") " Nov 28 11:30:11 crc kubenswrapper[4772]: I1128 11:30:11.693100 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62194471-8bd2-46f1-9891-e2bfe7cf8d67-kube-api-access-4sq8c" (OuterVolumeSpecName: "kube-api-access-4sq8c") pod "62194471-8bd2-46f1-9891-e2bfe7cf8d67" (UID: "62194471-8bd2-46f1-9891-e2bfe7cf8d67"). InnerVolumeSpecName "kube-api-access-4sq8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:30:11 crc kubenswrapper[4772]: I1128 11:30:11.724876 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62194471-8bd2-46f1-9891-e2bfe7cf8d67-inventory" (OuterVolumeSpecName: "inventory") pod "62194471-8bd2-46f1-9891-e2bfe7cf8d67" (UID: "62194471-8bd2-46f1-9891-e2bfe7cf8d67"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:30:11 crc kubenswrapper[4772]: I1128 11:30:11.737707 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62194471-8bd2-46f1-9891-e2bfe7cf8d67-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "62194471-8bd2-46f1-9891-e2bfe7cf8d67" (UID: "62194471-8bd2-46f1-9891-e2bfe7cf8d67"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:30:11 crc kubenswrapper[4772]: I1128 11:30:11.790314 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62194471-8bd2-46f1-9891-e2bfe7cf8d67-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:30:11 crc kubenswrapper[4772]: I1128 11:30:11.790679 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62194471-8bd2-46f1-9891-e2bfe7cf8d67-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:30:11 crc kubenswrapper[4772]: I1128 11:30:11.790689 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sq8c\" (UniqueName: \"kubernetes.io/projected/62194471-8bd2-46f1-9891-e2bfe7cf8d67-kube-api-access-4sq8c\") on node \"crc\" DevicePath \"\"" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.070231 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" event={"ID":"62194471-8bd2-46f1-9891-e2bfe7cf8d67","Type":"ContainerDied","Data":"1d26bfd7627d49cd8f1720fb8ac52a9b9275764712a6c38ab88a0d78751dac0a"} Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.070552 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d26bfd7627d49cd8f1720fb8ac52a9b9275764712a6c38ab88a0d78751dac0a" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.070380 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dkl6g" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.191848 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt"] Nov 28 11:30:12 crc kubenswrapper[4772]: E1128 11:30:12.192384 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62194471-8bd2-46f1-9891-e2bfe7cf8d67" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.192409 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="62194471-8bd2-46f1-9891-e2bfe7cf8d67" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.192721 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="62194471-8bd2-46f1-9891-e2bfe7cf8d67" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.193605 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.200330 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.204493 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.204540 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.204968 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.218554 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt"] Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.301197 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd8q9\" (UniqueName: \"kubernetes.io/projected/ab57579b-65be-4ef3-977f-574ca00f3d9a-kube-api-access-gd8q9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.301251 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.301739 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.302085 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.440369 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.440481 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.440558 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd8q9\" (UniqueName: \"kubernetes.io/projected/ab57579b-65be-4ef3-977f-574ca00f3d9a-kube-api-access-gd8q9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.440583 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.444804 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.444883 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.445018 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.473054 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd8q9\" (UniqueName: \"kubernetes.io/projected/ab57579b-65be-4ef3-977f-574ca00f3d9a-kube-api-access-gd8q9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:30:12 crc kubenswrapper[4772]: I1128 11:30:12.521733 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:30:13 crc kubenswrapper[4772]: I1128 11:30:13.190592 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt"] Nov 28 11:30:14 crc kubenswrapper[4772]: I1128 11:30:14.095761 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" event={"ID":"ab57579b-65be-4ef3-977f-574ca00f3d9a","Type":"ContainerStarted","Data":"c306a4f6eb37214a9c2d73d9a1bb70b065d10192cc9577a139fe0aed7a414b66"} Nov 28 11:30:14 crc kubenswrapper[4772]: I1128 11:30:14.096180 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" event={"ID":"ab57579b-65be-4ef3-977f-574ca00f3d9a","Type":"ContainerStarted","Data":"4c09f022572452f97bc7c5286d1dce8da769b50c53c387e5b93a9ca47c998a73"} Nov 28 11:30:14 crc kubenswrapper[4772]: I1128 11:30:14.123822 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" podStartSLOduration=1.61161864 podStartE2EDuration="2.123800441s" podCreationTimestamp="2025-11-28 11:30:12 +0000 UTC" firstStartedPulling="2025-11-28 11:30:13.200011892 +0000 UTC m=+1411.523255119" lastFinishedPulling="2025-11-28 11:30:13.712193693 +0000 UTC m=+1412.035436920" observedRunningTime="2025-11-28 11:30:14.118270821 +0000 UTC m=+1412.441514048" watchObservedRunningTime="2025-11-28 11:30:14.123800441 +0000 UTC m=+1412.447043668" Nov 28 11:30:23 crc kubenswrapper[4772]: I1128 11:30:23.897295 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:30:23 crc kubenswrapper[4772]: I1128 11:30:23.898103 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:30:43 crc kubenswrapper[4772]: I1128 11:30:43.623677 4772 scope.go:117] "RemoveContainer" containerID="ae4a513967566458793783d28fc7842e113f09eef0def36ee1433b7011aa8228" Nov 28 11:30:43 crc kubenswrapper[4772]: I1128 11:30:43.690794 4772 scope.go:117] "RemoveContainer" containerID="e9087979407f97b11ae8526798f63b1fcdb093303aa3e09e4c1a3d05120aeca3" Nov 28 11:30:53 crc kubenswrapper[4772]: I1128 11:30:53.896540 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:30:53 crc kubenswrapper[4772]: I1128 11:30:53.897323 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:31:23 crc kubenswrapper[4772]: I1128 11:31:23.897040 4772 patch_prober.go:28] 
interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:31:23 crc kubenswrapper[4772]: I1128 11:31:23.898007 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:31:23 crc kubenswrapper[4772]: I1128 11:31:23.898079 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:31:23 crc kubenswrapper[4772]: I1128 11:31:23.899181 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4940b44c04e1b9bb359aa7bb4ce020bd708399be9f1195fe75c4b24f11ffd061"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 11:31:23 crc kubenswrapper[4772]: I1128 11:31:23.899249 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://4940b44c04e1b9bb359aa7bb4ce020bd708399be9f1195fe75c4b24f11ffd061" gracePeriod=600 Nov 28 11:31:24 crc kubenswrapper[4772]: I1128 11:31:24.331100 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="4940b44c04e1b9bb359aa7bb4ce020bd708399be9f1195fe75c4b24f11ffd061" exitCode=0 Nov 28 11:31:24 crc kubenswrapper[4772]: I1128 11:31:24.331174 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"4940b44c04e1b9bb359aa7bb4ce020bd708399be9f1195fe75c4b24f11ffd061"} Nov 28 11:31:24 crc kubenswrapper[4772]: I1128 11:31:24.331585 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f"} Nov 28 11:31:24 crc kubenswrapper[4772]: I1128 11:31:24.331630 4772 scope.go:117] "RemoveContainer" containerID="da141ddbcec3414566d62a4d52bc506fe611de913cb5db4b82768e912f8c16c0" Nov 28 11:31:43 crc kubenswrapper[4772]: I1128 11:31:43.778990 4772 scope.go:117] "RemoveContainer" containerID="76ffd2b9c18295f3f278a4a8d1b93bbca59982d3fdf63509e8d4bde76968d8c0" Nov 28 11:31:43 crc kubenswrapper[4772]: I1128 11:31:43.841579 4772 scope.go:117] "RemoveContainer" containerID="471d4641727ce30c311b0515c91ce0acd3b44af77ea44a7e0c2c094f8f400d2e" Nov 28 11:31:43 crc kubenswrapper[4772]: I1128 11:31:43.883025 4772 scope.go:117] "RemoveContainer" containerID="b93ccb2edc1836e9818c9445fe9e47601947eadf07f7f46f8149c62f9e8c595a" Nov 28 11:32:07 crc kubenswrapper[4772]: I1128 11:32:07.391932 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r4xrb"] Nov 28 11:32:07 crc kubenswrapper[4772]: I1128 
11:32:07.394717 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:07 crc kubenswrapper[4772]: I1128 11:32:07.407061 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5853f23c-4d38-4331-bca1-d80337681c97-catalog-content\") pod \"certified-operators-r4xrb\" (UID: \"5853f23c-4d38-4331-bca1-d80337681c97\") " pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:07 crc kubenswrapper[4772]: I1128 11:32:07.407141 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkz4v\" (UniqueName: \"kubernetes.io/projected/5853f23c-4d38-4331-bca1-d80337681c97-kube-api-access-gkz4v\") pod \"certified-operators-r4xrb\" (UID: \"5853f23c-4d38-4331-bca1-d80337681c97\") " pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:07 crc kubenswrapper[4772]: I1128 11:32:07.407289 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5853f23c-4d38-4331-bca1-d80337681c97-utilities\") pod \"certified-operators-r4xrb\" (UID: \"5853f23c-4d38-4331-bca1-d80337681c97\") " pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:07 crc kubenswrapper[4772]: I1128 11:32:07.457656 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4xrb"] Nov 28 11:32:07 crc kubenswrapper[4772]: I1128 11:32:07.508533 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5853f23c-4d38-4331-bca1-d80337681c97-catalog-content\") pod \"certified-operators-r4xrb\" (UID: \"5853f23c-4d38-4331-bca1-d80337681c97\") " pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:07 crc kubenswrapper[4772]: I1128 11:32:07.508594 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkz4v\" (UniqueName: \"kubernetes.io/projected/5853f23c-4d38-4331-bca1-d80337681c97-kube-api-access-gkz4v\") pod \"certified-operators-r4xrb\" (UID: \"5853f23c-4d38-4331-bca1-d80337681c97\") " pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:07 crc kubenswrapper[4772]: I1128 11:32:07.508653 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5853f23c-4d38-4331-bca1-d80337681c97-utilities\") pod \"certified-operators-r4xrb\" (UID: \"5853f23c-4d38-4331-bca1-d80337681c97\") " pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:07 crc kubenswrapper[4772]: I1128 11:32:07.509288 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5853f23c-4d38-4331-bca1-d80337681c97-utilities\") pod \"certified-operators-r4xrb\" (UID: \"5853f23c-4d38-4331-bca1-d80337681c97\") " pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:07 crc kubenswrapper[4772]: I1128 11:32:07.509479 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5853f23c-4d38-4331-bca1-d80337681c97-catalog-content\") pod \"certified-operators-r4xrb\" (UID: \"5853f23c-4d38-4331-bca1-d80337681c97\") " pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:07 crc 
kubenswrapper[4772]: I1128 11:32:07.536702 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkz4v\" (UniqueName: \"kubernetes.io/projected/5853f23c-4d38-4331-bca1-d80337681c97-kube-api-access-gkz4v\") pod \"certified-operators-r4xrb\" (UID: \"5853f23c-4d38-4331-bca1-d80337681c97\") " pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:07 crc kubenswrapper[4772]: I1128 11:32:07.771005 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:08 crc kubenswrapper[4772]: I1128 11:32:08.306119 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4xrb"] Nov 28 11:32:08 crc kubenswrapper[4772]: I1128 11:32:08.947088 4772 generic.go:334] "Generic (PLEG): container finished" podID="5853f23c-4d38-4331-bca1-d80337681c97" containerID="fe850a2b17714c7432d188769fac32eb2bf702db6aa2dd92faf59feafddcd268" exitCode=0 Nov 28 11:32:08 crc kubenswrapper[4772]: I1128 11:32:08.947162 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4xrb" event={"ID":"5853f23c-4d38-4331-bca1-d80337681c97","Type":"ContainerDied","Data":"fe850a2b17714c7432d188769fac32eb2bf702db6aa2dd92faf59feafddcd268"} Nov 28 11:32:08 crc kubenswrapper[4772]: I1128 11:32:08.947208 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4xrb" event={"ID":"5853f23c-4d38-4331-bca1-d80337681c97","Type":"ContainerStarted","Data":"30d12ca814aec637541e527b1b64b5296abc81e4c64b5ab5439f37c0dabea89f"} Nov 28 11:32:10 crc kubenswrapper[4772]: I1128 11:32:10.990087 4772 generic.go:334] "Generic (PLEG): container finished" podID="5853f23c-4d38-4331-bca1-d80337681c97" containerID="ae8b22abd26fac27343a3cc8b416d40e8367a7164150820395b95c94328ad62c" exitCode=0 Nov 28 11:32:10 crc kubenswrapper[4772]: I1128 11:32:10.990179 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4xrb" event={"ID":"5853f23c-4d38-4331-bca1-d80337681c97","Type":"ContainerDied","Data":"ae8b22abd26fac27343a3cc8b416d40e8367a7164150820395b95c94328ad62c"} Nov 28 11:32:13 crc kubenswrapper[4772]: I1128 11:32:13.018272 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4xrb" event={"ID":"5853f23c-4d38-4331-bca1-d80337681c97","Type":"ContainerStarted","Data":"055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829"} Nov 28 11:32:13 crc kubenswrapper[4772]: I1128 11:32:13.044117 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r4xrb" podStartSLOduration=2.61616589 podStartE2EDuration="6.044091301s" podCreationTimestamp="2025-11-28 11:32:07 +0000 UTC" firstStartedPulling="2025-11-28 11:32:08.950901981 +0000 UTC m=+1527.274145238" lastFinishedPulling="2025-11-28 11:32:12.378827382 +0000 UTC m=+1530.702070649" observedRunningTime="2025-11-28 11:32:13.039123076 +0000 UTC m=+1531.362366323" watchObservedRunningTime="2025-11-28 11:32:13.044091301 +0000 UTC m=+1531.367334538" Nov 28 11:32:17 crc kubenswrapper[4772]: I1128 11:32:17.771692 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:17 crc kubenswrapper[4772]: I1128 11:32:17.772837 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:17 crc kubenswrapper[4772]: I1128 11:32:17.849651 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:18 crc kubenswrapper[4772]: I1128 11:32:18.140684 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:18 crc kubenswrapper[4772]: I1128 11:32:18.223342 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4xrb"] Nov 28 11:32:20 crc kubenswrapper[4772]: I1128 11:32:20.105703 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r4xrb" podUID="5853f23c-4d38-4331-bca1-d80337681c97" containerName="registry-server" containerID="cri-o://055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829" gracePeriod=2 Nov 28 11:32:20 crc kubenswrapper[4772]: E1128 11:32:20.386529 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5853f23c_4d38_4331_bca1_d80337681c97.slice/crio-conmon-055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5853f23c_4d38_4331_bca1_d80337681c97.slice/crio-055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829.scope\": RecentStats: unable to find data in memory cache]" Nov 28 11:32:20 crc kubenswrapper[4772]: I1128 11:32:20.632826 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:20 crc kubenswrapper[4772]: I1128 11:32:20.791893 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5853f23c-4d38-4331-bca1-d80337681c97-utilities\") pod \"5853f23c-4d38-4331-bca1-d80337681c97\" (UID: \"5853f23c-4d38-4331-bca1-d80337681c97\") " Nov 28 11:32:20 crc kubenswrapper[4772]: I1128 11:32:20.792566 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkz4v\" (UniqueName: \"kubernetes.io/projected/5853f23c-4d38-4331-bca1-d80337681c97-kube-api-access-gkz4v\") pod \"5853f23c-4d38-4331-bca1-d80337681c97\" (UID: \"5853f23c-4d38-4331-bca1-d80337681c97\") " Nov 28 11:32:20 crc kubenswrapper[4772]: I1128 11:32:20.792968 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5853f23c-4d38-4331-bca1-d80337681c97-catalog-content\") pod \"5853f23c-4d38-4331-bca1-d80337681c97\" (UID: \"5853f23c-4d38-4331-bca1-d80337681c97\") " Nov 28 11:32:20 crc kubenswrapper[4772]: I1128 11:32:20.793173 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5853f23c-4d38-4331-bca1-d80337681c97-utilities" (OuterVolumeSpecName: "utilities") pod "5853f23c-4d38-4331-bca1-d80337681c97" (UID: "5853f23c-4d38-4331-bca1-d80337681c97"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:32:20 crc kubenswrapper[4772]: I1128 11:32:20.794239 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5853f23c-4d38-4331-bca1-d80337681c97-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:32:20 crc kubenswrapper[4772]: I1128 11:32:20.800532 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5853f23c-4d38-4331-bca1-d80337681c97-kube-api-access-gkz4v" (OuterVolumeSpecName: "kube-api-access-gkz4v") pod "5853f23c-4d38-4331-bca1-d80337681c97" (UID: "5853f23c-4d38-4331-bca1-d80337681c97"). InnerVolumeSpecName "kube-api-access-gkz4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:32:20 crc kubenswrapper[4772]: I1128 11:32:20.856245 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5853f23c-4d38-4331-bca1-d80337681c97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5853f23c-4d38-4331-bca1-d80337681c97" (UID: "5853f23c-4d38-4331-bca1-d80337681c97"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:32:20 crc kubenswrapper[4772]: I1128 11:32:20.897243 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkz4v\" (UniqueName: \"kubernetes.io/projected/5853f23c-4d38-4331-bca1-d80337681c97-kube-api-access-gkz4v\") on node \"crc\" DevicePath \"\"" Nov 28 11:32:20 crc kubenswrapper[4772]: I1128 11:32:20.897344 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5853f23c-4d38-4331-bca1-d80337681c97-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.128494 4772 generic.go:334] "Generic (PLEG): container finished" podID="5853f23c-4d38-4331-bca1-d80337681c97" containerID="055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829" exitCode=0 Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.128570 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4xrb" event={"ID":"5853f23c-4d38-4331-bca1-d80337681c97","Type":"ContainerDied","Data":"055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829"} Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.128628 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4xrb" event={"ID":"5853f23c-4d38-4331-bca1-d80337681c97","Type":"ContainerDied","Data":"30d12ca814aec637541e527b1b64b5296abc81e4c64b5ab5439f37c0dabea89f"} Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.128662 4772 scope.go:117] "RemoveContainer" containerID="055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829" Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.128768 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r4xrb" Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.161953 4772 scope.go:117] "RemoveContainer" containerID="ae8b22abd26fac27343a3cc8b416d40e8367a7164150820395b95c94328ad62c" Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.190993 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4xrb"] Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.211801 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r4xrb"] Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.215409 4772 scope.go:117] "RemoveContainer" containerID="fe850a2b17714c7432d188769fac32eb2bf702db6aa2dd92faf59feafddcd268" Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.271806 4772 scope.go:117] "RemoveContainer" containerID="055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829" Nov 28 11:32:21 crc kubenswrapper[4772]: E1128 11:32:21.272489 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829\": container with ID starting with 055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829 not found: ID does not exist" containerID="055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829" Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.272562 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829"} err="failed to get container status \"055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829\": rpc error: code = NotFound desc = could not find container \"055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829\": container with ID starting with 055ae0f5d5e7d6211fd6594d05b73e7d7345ef462f41bf8acea14252fa6d6829 not found: ID does not exist" Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.272605 4772 scope.go:117] "RemoveContainer" containerID="ae8b22abd26fac27343a3cc8b416d40e8367a7164150820395b95c94328ad62c" Nov 28 11:32:21 crc kubenswrapper[4772]: E1128 11:32:21.273163 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae8b22abd26fac27343a3cc8b416d40e8367a7164150820395b95c94328ad62c\": container with ID starting with ae8b22abd26fac27343a3cc8b416d40e8367a7164150820395b95c94328ad62c not found: ID does not exist" containerID="ae8b22abd26fac27343a3cc8b416d40e8367a7164150820395b95c94328ad62c" Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.273232 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8b22abd26fac27343a3cc8b416d40e8367a7164150820395b95c94328ad62c"} err="failed to get container status \"ae8b22abd26fac27343a3cc8b416d40e8367a7164150820395b95c94328ad62c\": rpc error: code = NotFound desc = could not find container \"ae8b22abd26fac27343a3cc8b416d40e8367a7164150820395b95c94328ad62c\": container with ID starting with ae8b22abd26fac27343a3cc8b416d40e8367a7164150820395b95c94328ad62c not found: ID does not exist" Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.273276 4772 scope.go:117] "RemoveContainer" containerID="fe850a2b17714c7432d188769fac32eb2bf702db6aa2dd92faf59feafddcd268" Nov 28 11:32:21 crc kubenswrapper[4772]: E1128 11:32:21.273996 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fe850a2b17714c7432d188769fac32eb2bf702db6aa2dd92faf59feafddcd268\": container with ID starting with fe850a2b17714c7432d188769fac32eb2bf702db6aa2dd92faf59feafddcd268 not found: ID does not exist" containerID="fe850a2b17714c7432d188769fac32eb2bf702db6aa2dd92faf59feafddcd268" Nov 28 11:32:21 crc kubenswrapper[4772]: I1128 11:32:21.274051 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe850a2b17714c7432d188769fac32eb2bf702db6aa2dd92faf59feafddcd268"} err="failed to get container status \"fe850a2b17714c7432d188769fac32eb2bf702db6aa2dd92faf59feafddcd268\": rpc error: code = NotFound desc = could not find container \"fe850a2b17714c7432d188769fac32eb2bf702db6aa2dd92faf59feafddcd268\": container with ID starting with fe850a2b17714c7432d188769fac32eb2bf702db6aa2dd92faf59feafddcd268 not found: ID does not exist" Nov 28 11:32:22 crc kubenswrapper[4772]: I1128 11:32:22.008720 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5853f23c-4d38-4331-bca1-d80337681c97" path="/var/lib/kubelet/pods/5853f23c-4d38-4331-bca1-d80337681c97/volumes" Nov 28 11:32:32 crc kubenswrapper[4772]: I1128 11:32:32.856749 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tzw8t"] Nov 28 11:32:32 crc kubenswrapper[4772]: E1128 11:32:32.858179 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5853f23c-4d38-4331-bca1-d80337681c97" containerName="extract-content" Nov 28 11:32:32 crc kubenswrapper[4772]: I1128 11:32:32.858204 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5853f23c-4d38-4331-bca1-d80337681c97" containerName="extract-content" Nov 28 11:32:32 crc kubenswrapper[4772]: E1128 11:32:32.858255 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5853f23c-4d38-4331-bca1-d80337681c97" containerName="registry-server" Nov 28 11:32:32 crc kubenswrapper[4772]: I1128 11:32:32.858269 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5853f23c-4d38-4331-bca1-d80337681c97" containerName="registry-server" Nov 28 11:32:32 crc kubenswrapper[4772]: E1128 11:32:32.858328 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5853f23c-4d38-4331-bca1-d80337681c97" containerName="extract-utilities" Nov 28 11:32:32 crc kubenswrapper[4772]: I1128 11:32:32.858343 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5853f23c-4d38-4331-bca1-d80337681c97" containerName="extract-utilities" Nov 28 11:32:32 crc kubenswrapper[4772]: I1128 11:32:32.858736 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="5853f23c-4d38-4331-bca1-d80337681c97" containerName="registry-server" Nov 28 11:32:32 crc kubenswrapper[4772]: I1128 11:32:32.861410 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:32 crc kubenswrapper[4772]: I1128 11:32:32.869581 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tzw8t"] Nov 28 11:32:32 crc kubenswrapper[4772]: I1128 11:32:32.994463 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7jnx\" (UniqueName: \"kubernetes.io/projected/982031e7-7d11-48ed-9fc8-377b935f80ce-kube-api-access-h7jnx\") pod \"redhat-operators-tzw8t\" (UID: \"982031e7-7d11-48ed-9fc8-377b935f80ce\") " pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:32 crc kubenswrapper[4772]: I1128 11:32:32.994598 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982031e7-7d11-48ed-9fc8-377b935f80ce-catalog-content\") pod \"redhat-operators-tzw8t\" (UID: \"982031e7-7d11-48ed-9fc8-377b935f80ce\") " pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:32 crc kubenswrapper[4772]: I1128 11:32:32.997700 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982031e7-7d11-48ed-9fc8-377b935f80ce-utilities\") pod \"redhat-operators-tzw8t\" (UID: \"982031e7-7d11-48ed-9fc8-377b935f80ce\") " pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:33 crc kubenswrapper[4772]: I1128 11:32:33.100429 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982031e7-7d11-48ed-9fc8-377b935f80ce-utilities\") pod \"redhat-operators-tzw8t\" (UID: \"982031e7-7d11-48ed-9fc8-377b935f80ce\") " pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:33 crc kubenswrapper[4772]: I1128 11:32:33.100547 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7jnx\" (UniqueName: \"kubernetes.io/projected/982031e7-7d11-48ed-9fc8-377b935f80ce-kube-api-access-h7jnx\") pod \"redhat-operators-tzw8t\" (UID: \"982031e7-7d11-48ed-9fc8-377b935f80ce\") " pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:33 crc kubenswrapper[4772]: I1128 11:32:33.100658 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982031e7-7d11-48ed-9fc8-377b935f80ce-catalog-content\") pod \"redhat-operators-tzw8t\" (UID: \"982031e7-7d11-48ed-9fc8-377b935f80ce\") " pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:33 crc kubenswrapper[4772]: I1128 11:32:33.101465 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982031e7-7d11-48ed-9fc8-377b935f80ce-catalog-content\") pod \"redhat-operators-tzw8t\" (UID: \"982031e7-7d11-48ed-9fc8-377b935f80ce\") " pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:33 crc kubenswrapper[4772]: I1128 11:32:33.102261 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982031e7-7d11-48ed-9fc8-377b935f80ce-utilities\") pod \"redhat-operators-tzw8t\" (UID: \"982031e7-7d11-48ed-9fc8-377b935f80ce\") " pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:33 crc kubenswrapper[4772]: I1128 11:32:33.128925 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h7jnx\" (UniqueName: \"kubernetes.io/projected/982031e7-7d11-48ed-9fc8-377b935f80ce-kube-api-access-h7jnx\") pod \"redhat-operators-tzw8t\" (UID: \"982031e7-7d11-48ed-9fc8-377b935f80ce\") " pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:33 crc kubenswrapper[4772]: I1128 11:32:33.201408 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:33 crc kubenswrapper[4772]: I1128 11:32:33.776039 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tzw8t"] Nov 28 11:32:34 crc kubenswrapper[4772]: I1128 11:32:34.310485 4772 generic.go:334] "Generic (PLEG): container finished" podID="982031e7-7d11-48ed-9fc8-377b935f80ce" containerID="6a71cf0caf7bb6508a786b3f10806854e77f574e08a4c837799be4ca67c8ef35" exitCode=0 Nov 28 11:32:34 crc kubenswrapper[4772]: I1128 11:32:34.310569 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzw8t" event={"ID":"982031e7-7d11-48ed-9fc8-377b935f80ce","Type":"ContainerDied","Data":"6a71cf0caf7bb6508a786b3f10806854e77f574e08a4c837799be4ca67c8ef35"} Nov 28 11:32:34 crc kubenswrapper[4772]: I1128 11:32:34.310610 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzw8t" event={"ID":"982031e7-7d11-48ed-9fc8-377b935f80ce","Type":"ContainerStarted","Data":"cb1f803cccfc80811bd62eaa56c083c4496712182fef1ed324b3e73edadcfa97"} Nov 28 11:32:34 crc kubenswrapper[4772]: I1128 11:32:34.313808 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 11:32:36 crc kubenswrapper[4772]: I1128 11:32:36.344579 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzw8t" event={"ID":"982031e7-7d11-48ed-9fc8-377b935f80ce","Type":"ContainerStarted","Data":"d815be2c2262097b7b308577de2d8ad32c34bc1b58af8f278dc94d1cc2efa328"} Nov 28 11:32:37 crc kubenswrapper[4772]: I1128 11:32:37.361464 4772 generic.go:334] "Generic (PLEG): container finished" podID="982031e7-7d11-48ed-9fc8-377b935f80ce" containerID="d815be2c2262097b7b308577de2d8ad32c34bc1b58af8f278dc94d1cc2efa328" exitCode=0 Nov 28 11:32:37 crc kubenswrapper[4772]: I1128 11:32:37.361589 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzw8t" event={"ID":"982031e7-7d11-48ed-9fc8-377b935f80ce","Type":"ContainerDied","Data":"d815be2c2262097b7b308577de2d8ad32c34bc1b58af8f278dc94d1cc2efa328"} Nov 28 11:32:38 crc kubenswrapper[4772]: I1128 11:32:38.378838 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzw8t" event={"ID":"982031e7-7d11-48ed-9fc8-377b935f80ce","Type":"ContainerStarted","Data":"1f594f206714f8549b3229bc782380878b5dd329f49c77f6f04f90da0f0c58ef"} Nov 28 11:32:38 crc kubenswrapper[4772]: I1128 11:32:38.416394 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tzw8t" podStartSLOduration=2.666238371 podStartE2EDuration="6.416342204s" podCreationTimestamp="2025-11-28 11:32:32 +0000 UTC" firstStartedPulling="2025-11-28 11:32:34.313501231 +0000 UTC m=+1552.636744458" lastFinishedPulling="2025-11-28 11:32:38.063605054 +0000 UTC m=+1556.386848291" observedRunningTime="2025-11-28 11:32:38.404478182 +0000 UTC m=+1556.727721439" watchObservedRunningTime="2025-11-28 11:32:38.416342204 +0000 UTC m=+1556.739585471" Nov 28 11:32:43 crc 
kubenswrapper[4772]: I1128 11:32:43.201886 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:43 crc kubenswrapper[4772]: I1128 11:32:43.202531 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:43 crc kubenswrapper[4772]: I1128 11:32:43.984998 4772 scope.go:117] "RemoveContainer" containerID="0effa7f2118567a9fe1f4e461ef0ac8c4b63a48e0701f358ff39fe260776ff23" Nov 28 11:32:44 crc kubenswrapper[4772]: I1128 11:32:44.258134 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tzw8t" podUID="982031e7-7d11-48ed-9fc8-377b935f80ce" containerName="registry-server" probeResult="failure" output=< Nov 28 11:32:44 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 28 11:32:44 crc kubenswrapper[4772]: > Nov 28 11:32:44 crc kubenswrapper[4772]: I1128 11:32:44.522939 4772 scope.go:117] "RemoveContainer" containerID="26b9cdd0335530b93f409ca9276bdde27359129391ab7f466805ca1c260b2aa9" Nov 28 11:32:53 crc kubenswrapper[4772]: I1128 11:32:53.298263 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:53 crc kubenswrapper[4772]: I1128 11:32:53.376019 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:53 crc kubenswrapper[4772]: I1128 11:32:53.560469 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tzw8t"] Nov 28 11:32:54 crc kubenswrapper[4772]: I1128 11:32:54.574516 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tzw8t" podUID="982031e7-7d11-48ed-9fc8-377b935f80ce" containerName="registry-server" containerID="cri-o://1f594f206714f8549b3229bc782380878b5dd329f49c77f6f04f90da0f0c58ef" gracePeriod=2 Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.162635 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.233837 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982031e7-7d11-48ed-9fc8-377b935f80ce-catalog-content\") pod \"982031e7-7d11-48ed-9fc8-377b935f80ce\" (UID: \"982031e7-7d11-48ed-9fc8-377b935f80ce\") " Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.234116 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7jnx\" (UniqueName: \"kubernetes.io/projected/982031e7-7d11-48ed-9fc8-377b935f80ce-kube-api-access-h7jnx\") pod \"982031e7-7d11-48ed-9fc8-377b935f80ce\" (UID: \"982031e7-7d11-48ed-9fc8-377b935f80ce\") " Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.234198 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982031e7-7d11-48ed-9fc8-377b935f80ce-utilities\") pod \"982031e7-7d11-48ed-9fc8-377b935f80ce\" (UID: \"982031e7-7d11-48ed-9fc8-377b935f80ce\") " Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.236430 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/982031e7-7d11-48ed-9fc8-377b935f80ce-utilities" (OuterVolumeSpecName: "utilities") pod "982031e7-7d11-48ed-9fc8-377b935f80ce" (UID: "982031e7-7d11-48ed-9fc8-377b935f80ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.242933 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/982031e7-7d11-48ed-9fc8-377b935f80ce-kube-api-access-h7jnx" (OuterVolumeSpecName: "kube-api-access-h7jnx") pod "982031e7-7d11-48ed-9fc8-377b935f80ce" (UID: "982031e7-7d11-48ed-9fc8-377b935f80ce"). InnerVolumeSpecName "kube-api-access-h7jnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.339869 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7jnx\" (UniqueName: \"kubernetes.io/projected/982031e7-7d11-48ed-9fc8-377b935f80ce-kube-api-access-h7jnx\") on node \"crc\" DevicePath \"\"" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.339912 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982031e7-7d11-48ed-9fc8-377b935f80ce-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.347892 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/982031e7-7d11-48ed-9fc8-377b935f80ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "982031e7-7d11-48ed-9fc8-377b935f80ce" (UID: "982031e7-7d11-48ed-9fc8-377b935f80ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.442258 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982031e7-7d11-48ed-9fc8-377b935f80ce-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.586373 4772 generic.go:334] "Generic (PLEG): container finished" podID="982031e7-7d11-48ed-9fc8-377b935f80ce" containerID="1f594f206714f8549b3229bc782380878b5dd329f49c77f6f04f90da0f0c58ef" exitCode=0 Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.586423 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzw8t" event={"ID":"982031e7-7d11-48ed-9fc8-377b935f80ce","Type":"ContainerDied","Data":"1f594f206714f8549b3229bc782380878b5dd329f49c77f6f04f90da0f0c58ef"} Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.586453 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzw8t" event={"ID":"982031e7-7d11-48ed-9fc8-377b935f80ce","Type":"ContainerDied","Data":"cb1f803cccfc80811bd62eaa56c083c4496712182fef1ed324b3e73edadcfa97"} Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.586470 4772 scope.go:117] "RemoveContainer" containerID="1f594f206714f8549b3229bc782380878b5dd329f49c77f6f04f90da0f0c58ef" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.586467 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzw8t" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.622540 4772 scope.go:117] "RemoveContainer" containerID="d815be2c2262097b7b308577de2d8ad32c34bc1b58af8f278dc94d1cc2efa328" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.631234 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tzw8t"] Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.644800 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tzw8t"] Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.660264 4772 scope.go:117] "RemoveContainer" containerID="6a71cf0caf7bb6508a786b3f10806854e77f574e08a4c837799be4ca67c8ef35" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.697952 4772 scope.go:117] "RemoveContainer" containerID="1f594f206714f8549b3229bc782380878b5dd329f49c77f6f04f90da0f0c58ef" Nov 28 11:32:55 crc kubenswrapper[4772]: E1128 11:32:55.699790 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f594f206714f8549b3229bc782380878b5dd329f49c77f6f04f90da0f0c58ef\": container with ID starting with 1f594f206714f8549b3229bc782380878b5dd329f49c77f6f04f90da0f0c58ef not found: ID does not exist" containerID="1f594f206714f8549b3229bc782380878b5dd329f49c77f6f04f90da0f0c58ef" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.699819 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f594f206714f8549b3229bc782380878b5dd329f49c77f6f04f90da0f0c58ef"} err="failed to get container status \"1f594f206714f8549b3229bc782380878b5dd329f49c77f6f04f90da0f0c58ef\": rpc error: code = NotFound desc = could not find container \"1f594f206714f8549b3229bc782380878b5dd329f49c77f6f04f90da0f0c58ef\": container with ID starting with 1f594f206714f8549b3229bc782380878b5dd329f49c77f6f04f90da0f0c58ef not found: ID does not exist" Nov 28 11:32:55 crc 
kubenswrapper[4772]: I1128 11:32:55.699842 4772 scope.go:117] "RemoveContainer" containerID="d815be2c2262097b7b308577de2d8ad32c34bc1b58af8f278dc94d1cc2efa328" Nov 28 11:32:55 crc kubenswrapper[4772]: E1128 11:32:55.700131 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d815be2c2262097b7b308577de2d8ad32c34bc1b58af8f278dc94d1cc2efa328\": container with ID starting with d815be2c2262097b7b308577de2d8ad32c34bc1b58af8f278dc94d1cc2efa328 not found: ID does not exist" containerID="d815be2c2262097b7b308577de2d8ad32c34bc1b58af8f278dc94d1cc2efa328" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.700155 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d815be2c2262097b7b308577de2d8ad32c34bc1b58af8f278dc94d1cc2efa328"} err="failed to get container status \"d815be2c2262097b7b308577de2d8ad32c34bc1b58af8f278dc94d1cc2efa328\": rpc error: code = NotFound desc = could not find container \"d815be2c2262097b7b308577de2d8ad32c34bc1b58af8f278dc94d1cc2efa328\": container with ID starting with d815be2c2262097b7b308577de2d8ad32c34bc1b58af8f278dc94d1cc2efa328 not found: ID does not exist" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.700170 4772 scope.go:117] "RemoveContainer" containerID="6a71cf0caf7bb6508a786b3f10806854e77f574e08a4c837799be4ca67c8ef35" Nov 28 11:32:55 crc kubenswrapper[4772]: E1128 11:32:55.700549 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a71cf0caf7bb6508a786b3f10806854e77f574e08a4c837799be4ca67c8ef35\": container with ID starting with 6a71cf0caf7bb6508a786b3f10806854e77f574e08a4c837799be4ca67c8ef35 not found: ID does not exist" containerID="6a71cf0caf7bb6508a786b3f10806854e77f574e08a4c837799be4ca67c8ef35" Nov 28 11:32:55 crc kubenswrapper[4772]: I1128 11:32:55.700670 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a71cf0caf7bb6508a786b3f10806854e77f574e08a4c837799be4ca67c8ef35"} err="failed to get container status \"6a71cf0caf7bb6508a786b3f10806854e77f574e08a4c837799be4ca67c8ef35\": rpc error: code = NotFound desc = could not find container \"6a71cf0caf7bb6508a786b3f10806854e77f574e08a4c837799be4ca67c8ef35\": container with ID starting with 6a71cf0caf7bb6508a786b3f10806854e77f574e08a4c837799be4ca67c8ef35 not found: ID does not exist" Nov 28 11:32:56 crc kubenswrapper[4772]: I1128 11:32:56.005455 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="982031e7-7d11-48ed-9fc8-377b935f80ce" path="/var/lib/kubelet/pods/982031e7-7d11-48ed-9fc8-377b935f80ce/volumes" Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.736886 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hg745"] Nov 28 11:33:11 crc kubenswrapper[4772]: E1128 11:33:11.738390 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982031e7-7d11-48ed-9fc8-377b935f80ce" containerName="extract-utilities" Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.738411 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="982031e7-7d11-48ed-9fc8-377b935f80ce" containerName="extract-utilities" Nov 28 11:33:11 crc kubenswrapper[4772]: E1128 11:33:11.738438 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982031e7-7d11-48ed-9fc8-377b935f80ce" containerName="registry-server" Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.738446 4772 
state_mem.go:107] "Deleted CPUSet assignment" podUID="982031e7-7d11-48ed-9fc8-377b935f80ce" containerName="registry-server" Nov 28 11:33:11 crc kubenswrapper[4772]: E1128 11:33:11.738475 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982031e7-7d11-48ed-9fc8-377b935f80ce" containerName="extract-content" Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.738484 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="982031e7-7d11-48ed-9fc8-377b935f80ce" containerName="extract-content" Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.738737 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="982031e7-7d11-48ed-9fc8-377b935f80ce" containerName="registry-server" Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.740774 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.754619 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hg745"] Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.819389 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6111b7a1-69cb-4375-8bf5-3c74997aca74-utilities\") pod \"redhat-marketplace-hg745\" (UID: \"6111b7a1-69cb-4375-8bf5-3c74997aca74\") " pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.819503 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5qgx\" (UniqueName: \"kubernetes.io/projected/6111b7a1-69cb-4375-8bf5-3c74997aca74-kube-api-access-h5qgx\") pod \"redhat-marketplace-hg745\" (UID: \"6111b7a1-69cb-4375-8bf5-3c74997aca74\") " pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.819618 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6111b7a1-69cb-4375-8bf5-3c74997aca74-catalog-content\") pod \"redhat-marketplace-hg745\" (UID: \"6111b7a1-69cb-4375-8bf5-3c74997aca74\") " pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.921809 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5qgx\" (UniqueName: \"kubernetes.io/projected/6111b7a1-69cb-4375-8bf5-3c74997aca74-kube-api-access-h5qgx\") pod \"redhat-marketplace-hg745\" (UID: \"6111b7a1-69cb-4375-8bf5-3c74997aca74\") " pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.921909 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6111b7a1-69cb-4375-8bf5-3c74997aca74-catalog-content\") pod \"redhat-marketplace-hg745\" (UID: \"6111b7a1-69cb-4375-8bf5-3c74997aca74\") " pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.922029 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6111b7a1-69cb-4375-8bf5-3c74997aca74-utilities\") pod \"redhat-marketplace-hg745\" (UID: \"6111b7a1-69cb-4375-8bf5-3c74997aca74\") " pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:11 crc 
kubenswrapper[4772]: I1128 11:33:11.922648 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6111b7a1-69cb-4375-8bf5-3c74997aca74-utilities\") pod \"redhat-marketplace-hg745\" (UID: \"6111b7a1-69cb-4375-8bf5-3c74997aca74\") " pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.922756 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6111b7a1-69cb-4375-8bf5-3c74997aca74-catalog-content\") pod \"redhat-marketplace-hg745\" (UID: \"6111b7a1-69cb-4375-8bf5-3c74997aca74\") " pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:11 crc kubenswrapper[4772]: I1128 11:33:11.944679 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5qgx\" (UniqueName: \"kubernetes.io/projected/6111b7a1-69cb-4375-8bf5-3c74997aca74-kube-api-access-h5qgx\") pod \"redhat-marketplace-hg745\" (UID: \"6111b7a1-69cb-4375-8bf5-3c74997aca74\") " pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:12 crc kubenswrapper[4772]: I1128 11:33:12.115512 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:12 crc kubenswrapper[4772]: I1128 11:33:12.610033 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hg745"] Nov 28 11:33:12 crc kubenswrapper[4772]: I1128 11:33:12.856226 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg745" event={"ID":"6111b7a1-69cb-4375-8bf5-3c74997aca74","Type":"ContainerStarted","Data":"18276366e22094cf0f8c410fcdf255720fc99daf7044a20470fe549ca1d6f555"} Nov 28 11:33:12 crc kubenswrapper[4772]: I1128 11:33:12.856726 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg745" event={"ID":"6111b7a1-69cb-4375-8bf5-3c74997aca74","Type":"ContainerStarted","Data":"726cec9b80e8daa965f41f1bb39a710ea5d7e4bcb5f5b018b2182f4d24f5d4f4"} Nov 28 11:33:13 crc kubenswrapper[4772]: I1128 11:33:13.869338 4772 generic.go:334] "Generic (PLEG): container finished" podID="6111b7a1-69cb-4375-8bf5-3c74997aca74" containerID="18276366e22094cf0f8c410fcdf255720fc99daf7044a20470fe549ca1d6f555" exitCode=0 Nov 28 11:33:13 crc kubenswrapper[4772]: I1128 11:33:13.869464 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg745" event={"ID":"6111b7a1-69cb-4375-8bf5-3c74997aca74","Type":"ContainerDied","Data":"18276366e22094cf0f8c410fcdf255720fc99daf7044a20470fe549ca1d6f555"} Nov 28 11:33:16 crc kubenswrapper[4772]: I1128 11:33:16.905606 4772 generic.go:334] "Generic (PLEG): container finished" podID="6111b7a1-69cb-4375-8bf5-3c74997aca74" containerID="a443ce58e3f9653e7e4656f90bffc9c3a2bcc68214d95c62f34b3322a875df53" exitCode=0 Nov 28 11:33:16 crc kubenswrapper[4772]: I1128 11:33:16.905740 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg745" event={"ID":"6111b7a1-69cb-4375-8bf5-3c74997aca74","Type":"ContainerDied","Data":"a443ce58e3f9653e7e4656f90bffc9c3a2bcc68214d95c62f34b3322a875df53"} Nov 28 11:33:17 crc kubenswrapper[4772]: I1128 11:33:17.920689 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg745" 
event={"ID":"6111b7a1-69cb-4375-8bf5-3c74997aca74","Type":"ContainerStarted","Data":"b09d56cc80b75b819940ec6546f129269a29dd057655d88d5a660f32e470e0bb"} Nov 28 11:33:17 crc kubenswrapper[4772]: I1128 11:33:17.949444 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hg745" podStartSLOduration=3.479213881 podStartE2EDuration="6.949424981s" podCreationTimestamp="2025-11-28 11:33:11 +0000 UTC" firstStartedPulling="2025-11-28 11:33:13.87349686 +0000 UTC m=+1592.196740087" lastFinishedPulling="2025-11-28 11:33:17.34370794 +0000 UTC m=+1595.666951187" observedRunningTime="2025-11-28 11:33:17.94129107 +0000 UTC m=+1596.264534367" watchObservedRunningTime="2025-11-28 11:33:17.949424981 +0000 UTC m=+1596.272668208" Nov 28 11:33:22 crc kubenswrapper[4772]: I1128 11:33:22.115669 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:22 crc kubenswrapper[4772]: I1128 11:33:22.116607 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:22 crc kubenswrapper[4772]: I1128 11:33:22.170950 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:23 crc kubenswrapper[4772]: I1128 11:33:23.031266 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:23 crc kubenswrapper[4772]: I1128 11:33:23.082726 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hg745"] Nov 28 11:33:24 crc kubenswrapper[4772]: I1128 11:33:24.992704 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hg745" podUID="6111b7a1-69cb-4375-8bf5-3c74997aca74" containerName="registry-server" containerID="cri-o://b09d56cc80b75b819940ec6546f129269a29dd057655d88d5a660f32e470e0bb" gracePeriod=2 Nov 28 11:33:25 crc kubenswrapper[4772]: I1128 11:33:25.509391 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:25 crc kubenswrapper[4772]: I1128 11:33:25.557276 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5qgx\" (UniqueName: \"kubernetes.io/projected/6111b7a1-69cb-4375-8bf5-3c74997aca74-kube-api-access-h5qgx\") pod \"6111b7a1-69cb-4375-8bf5-3c74997aca74\" (UID: \"6111b7a1-69cb-4375-8bf5-3c74997aca74\") " Nov 28 11:33:25 crc kubenswrapper[4772]: I1128 11:33:25.557675 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6111b7a1-69cb-4375-8bf5-3c74997aca74-utilities\") pod \"6111b7a1-69cb-4375-8bf5-3c74997aca74\" (UID: \"6111b7a1-69cb-4375-8bf5-3c74997aca74\") " Nov 28 11:33:25 crc kubenswrapper[4772]: I1128 11:33:25.557740 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6111b7a1-69cb-4375-8bf5-3c74997aca74-catalog-content\") pod \"6111b7a1-69cb-4375-8bf5-3c74997aca74\" (UID: \"6111b7a1-69cb-4375-8bf5-3c74997aca74\") " Nov 28 11:33:25 crc kubenswrapper[4772]: I1128 11:33:25.559026 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6111b7a1-69cb-4375-8bf5-3c74997aca74-utilities" (OuterVolumeSpecName: "utilities") pod "6111b7a1-69cb-4375-8bf5-3c74997aca74" (UID: "6111b7a1-69cb-4375-8bf5-3c74997aca74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:33:25 crc kubenswrapper[4772]: I1128 11:33:25.564119 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6111b7a1-69cb-4375-8bf5-3c74997aca74-kube-api-access-h5qgx" (OuterVolumeSpecName: "kube-api-access-h5qgx") pod "6111b7a1-69cb-4375-8bf5-3c74997aca74" (UID: "6111b7a1-69cb-4375-8bf5-3c74997aca74"). InnerVolumeSpecName "kube-api-access-h5qgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:33:25 crc kubenswrapper[4772]: I1128 11:33:25.577973 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6111b7a1-69cb-4375-8bf5-3c74997aca74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6111b7a1-69cb-4375-8bf5-3c74997aca74" (UID: "6111b7a1-69cb-4375-8bf5-3c74997aca74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:33:25 crc kubenswrapper[4772]: I1128 11:33:25.659888 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6111b7a1-69cb-4375-8bf5-3c74997aca74-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:33:25 crc kubenswrapper[4772]: I1128 11:33:25.660266 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6111b7a1-69cb-4375-8bf5-3c74997aca74-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:33:25 crc kubenswrapper[4772]: I1128 11:33:25.660280 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5qgx\" (UniqueName: \"kubernetes.io/projected/6111b7a1-69cb-4375-8bf5-3c74997aca74-kube-api-access-h5qgx\") on node \"crc\" DevicePath \"\"" Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.015625 4772 generic.go:334] "Generic (PLEG): container finished" podID="6111b7a1-69cb-4375-8bf5-3c74997aca74" containerID="b09d56cc80b75b819940ec6546f129269a29dd057655d88d5a660f32e470e0bb" exitCode=0 Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.015785 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hg745" Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.019760 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg745" event={"ID":"6111b7a1-69cb-4375-8bf5-3c74997aca74","Type":"ContainerDied","Data":"b09d56cc80b75b819940ec6546f129269a29dd057655d88d5a660f32e470e0bb"} Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.019837 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg745" event={"ID":"6111b7a1-69cb-4375-8bf5-3c74997aca74","Type":"ContainerDied","Data":"726cec9b80e8daa965f41f1bb39a710ea5d7e4bcb5f5b018b2182f4d24f5d4f4"} Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.019871 4772 scope.go:117] "RemoveContainer" containerID="b09d56cc80b75b819940ec6546f129269a29dd057655d88d5a660f32e470e0bb" Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.075848 4772 scope.go:117] "RemoveContainer" containerID="a443ce58e3f9653e7e4656f90bffc9c3a2bcc68214d95c62f34b3322a875df53" Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.091685 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hg745"] Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.103708 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hg745"] Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.124404 4772 scope.go:117] "RemoveContainer" containerID="18276366e22094cf0f8c410fcdf255720fc99daf7044a20470fe549ca1d6f555" Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.154347 4772 scope.go:117] "RemoveContainer" containerID="b09d56cc80b75b819940ec6546f129269a29dd057655d88d5a660f32e470e0bb" Nov 28 11:33:26 crc kubenswrapper[4772]: E1128 11:33:26.155116 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b09d56cc80b75b819940ec6546f129269a29dd057655d88d5a660f32e470e0bb\": container with ID starting with b09d56cc80b75b819940ec6546f129269a29dd057655d88d5a660f32e470e0bb not found: ID does not exist" containerID="b09d56cc80b75b819940ec6546f129269a29dd057655d88d5a660f32e470e0bb" Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.155169 4772 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b09d56cc80b75b819940ec6546f129269a29dd057655d88d5a660f32e470e0bb"} err="failed to get container status \"b09d56cc80b75b819940ec6546f129269a29dd057655d88d5a660f32e470e0bb\": rpc error: code = NotFound desc = could not find container \"b09d56cc80b75b819940ec6546f129269a29dd057655d88d5a660f32e470e0bb\": container with ID starting with b09d56cc80b75b819940ec6546f129269a29dd057655d88d5a660f32e470e0bb not found: ID does not exist" Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.155203 4772 scope.go:117] "RemoveContainer" containerID="a443ce58e3f9653e7e4656f90bffc9c3a2bcc68214d95c62f34b3322a875df53" Nov 28 11:33:26 crc kubenswrapper[4772]: E1128 11:33:26.155889 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a443ce58e3f9653e7e4656f90bffc9c3a2bcc68214d95c62f34b3322a875df53\": container with ID starting with a443ce58e3f9653e7e4656f90bffc9c3a2bcc68214d95c62f34b3322a875df53 not found: ID does not exist" containerID="a443ce58e3f9653e7e4656f90bffc9c3a2bcc68214d95c62f34b3322a875df53" Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.155952 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a443ce58e3f9653e7e4656f90bffc9c3a2bcc68214d95c62f34b3322a875df53"} err="failed to get container status \"a443ce58e3f9653e7e4656f90bffc9c3a2bcc68214d95c62f34b3322a875df53\": rpc error: code = NotFound desc = could not find container \"a443ce58e3f9653e7e4656f90bffc9c3a2bcc68214d95c62f34b3322a875df53\": container with ID starting with a443ce58e3f9653e7e4656f90bffc9c3a2bcc68214d95c62f34b3322a875df53 not found: ID does not exist" Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.155987 4772 scope.go:117] "RemoveContainer" containerID="18276366e22094cf0f8c410fcdf255720fc99daf7044a20470fe549ca1d6f555" Nov 28 11:33:26 crc kubenswrapper[4772]: E1128 11:33:26.156596 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18276366e22094cf0f8c410fcdf255720fc99daf7044a20470fe549ca1d6f555\": container with ID starting with 18276366e22094cf0f8c410fcdf255720fc99daf7044a20470fe549ca1d6f555 not found: ID does not exist" containerID="18276366e22094cf0f8c410fcdf255720fc99daf7044a20470fe549ca1d6f555" Nov 28 11:33:26 crc kubenswrapper[4772]: I1128 11:33:26.156638 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18276366e22094cf0f8c410fcdf255720fc99daf7044a20470fe549ca1d6f555"} err="failed to get container status \"18276366e22094cf0f8c410fcdf255720fc99daf7044a20470fe549ca1d6f555\": rpc error: code = NotFound desc = could not find container \"18276366e22094cf0f8c410fcdf255720fc99daf7044a20470fe549ca1d6f555\": container with ID starting with 18276366e22094cf0f8c410fcdf255720fc99daf7044a20470fe549ca1d6f555 not found: ID does not exist" Nov 28 11:33:28 crc kubenswrapper[4772]: I1128 11:33:28.007861 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6111b7a1-69cb-4375-8bf5-3c74997aca74" path="/var/lib/kubelet/pods/6111b7a1-69cb-4375-8bf5-3c74997aca74/volumes" Nov 28 11:33:29 crc kubenswrapper[4772]: I1128 11:33:29.058099 4772 generic.go:334] "Generic (PLEG): container finished" podID="ab57579b-65be-4ef3-977f-574ca00f3d9a" containerID="c306a4f6eb37214a9c2d73d9a1bb70b065d10192cc9577a139fe0aed7a414b66" exitCode=0 Nov 28 11:33:29 crc kubenswrapper[4772]: I1128 
11:33:29.058150 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" event={"ID":"ab57579b-65be-4ef3-977f-574ca00f3d9a","Type":"ContainerDied","Data":"c306a4f6eb37214a9c2d73d9a1bb70b065d10192cc9577a139fe0aed7a414b66"} Nov 28 11:33:30 crc kubenswrapper[4772]: I1128 11:33:30.674529 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:33:30 crc kubenswrapper[4772]: I1128 11:33:30.787306 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-bootstrap-combined-ca-bundle\") pod \"ab57579b-65be-4ef3-977f-574ca00f3d9a\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " Nov 28 11:33:30 crc kubenswrapper[4772]: I1128 11:33:30.787452 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-ssh-key\") pod \"ab57579b-65be-4ef3-977f-574ca00f3d9a\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " Nov 28 11:33:30 crc kubenswrapper[4772]: I1128 11:33:30.787604 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd8q9\" (UniqueName: \"kubernetes.io/projected/ab57579b-65be-4ef3-977f-574ca00f3d9a-kube-api-access-gd8q9\") pod \"ab57579b-65be-4ef3-977f-574ca00f3d9a\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " Nov 28 11:33:30 crc kubenswrapper[4772]: I1128 11:33:30.787656 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-inventory\") pod \"ab57579b-65be-4ef3-977f-574ca00f3d9a\" (UID: \"ab57579b-65be-4ef3-977f-574ca00f3d9a\") " Nov 28 11:33:30 crc kubenswrapper[4772]: I1128 11:33:30.798065 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab57579b-65be-4ef3-977f-574ca00f3d9a-kube-api-access-gd8q9" (OuterVolumeSpecName: "kube-api-access-gd8q9") pod "ab57579b-65be-4ef3-977f-574ca00f3d9a" (UID: "ab57579b-65be-4ef3-977f-574ca00f3d9a"). InnerVolumeSpecName "kube-api-access-gd8q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:33:30 crc kubenswrapper[4772]: I1128 11:33:30.799869 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ab57579b-65be-4ef3-977f-574ca00f3d9a" (UID: "ab57579b-65be-4ef3-977f-574ca00f3d9a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:33:30 crc kubenswrapper[4772]: I1128 11:33:30.824554 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-inventory" (OuterVolumeSpecName: "inventory") pod "ab57579b-65be-4ef3-977f-574ca00f3d9a" (UID: "ab57579b-65be-4ef3-977f-574ca00f3d9a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:33:30 crc kubenswrapper[4772]: I1128 11:33:30.830828 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ab57579b-65be-4ef3-977f-574ca00f3d9a" (UID: "ab57579b-65be-4ef3-977f-574ca00f3d9a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:33:30 crc kubenswrapper[4772]: I1128 11:33:30.890223 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd8q9\" (UniqueName: \"kubernetes.io/projected/ab57579b-65be-4ef3-977f-574ca00f3d9a-kube-api-access-gd8q9\") on node \"crc\" DevicePath \"\"" Nov 28 11:33:30 crc kubenswrapper[4772]: I1128 11:33:30.890271 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:33:30 crc kubenswrapper[4772]: I1128 11:33:30.890283 4772 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:33:30 crc kubenswrapper[4772]: I1128 11:33:30.890294 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab57579b-65be-4ef3-977f-574ca00f3d9a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.097145 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" event={"ID":"ab57579b-65be-4ef3-977f-574ca00f3d9a","Type":"ContainerDied","Data":"4c09f022572452f97bc7c5286d1dce8da769b50c53c387e5b93a9ca47c998a73"} Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.097199 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.097210 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c09f022572452f97bc7c5286d1dce8da769b50c53c387e5b93a9ca47c998a73" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.213603 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x"] Nov 28 11:33:31 crc kubenswrapper[4772]: E1128 11:33:31.214301 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6111b7a1-69cb-4375-8bf5-3c74997aca74" containerName="extract-utilities" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.214337 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6111b7a1-69cb-4375-8bf5-3c74997aca74" containerName="extract-utilities" Nov 28 11:33:31 crc kubenswrapper[4772]: E1128 11:33:31.214400 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6111b7a1-69cb-4375-8bf5-3c74997aca74" containerName="registry-server" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.214419 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6111b7a1-69cb-4375-8bf5-3c74997aca74" containerName="registry-server" Nov 28 11:33:31 crc kubenswrapper[4772]: E1128 11:33:31.214460 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6111b7a1-69cb-4375-8bf5-3c74997aca74" containerName="extract-content" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.214473 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6111b7a1-69cb-4375-8bf5-3c74997aca74" containerName="extract-content" Nov 28 11:33:31 crc kubenswrapper[4772]: E1128 11:33:31.214495 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab57579b-65be-4ef3-977f-574ca00f3d9a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.214509 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab57579b-65be-4ef3-977f-574ca00f3d9a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.214852 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab57579b-65be-4ef3-977f-574ca00f3d9a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.214935 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6111b7a1-69cb-4375-8bf5-3c74997aca74" containerName="registry-server" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.216040 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.218413 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.218699 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.218933 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.220670 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.225709 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x"] Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.297954 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54t5b\" (UniqueName: \"kubernetes.io/projected/49998ed7-1acb-4812-af0d-822d07292334-kube-api-access-54t5b\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dx99x\" (UID: \"49998ed7-1acb-4812-af0d-822d07292334\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.298403 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49998ed7-1acb-4812-af0d-822d07292334-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dx99x\" (UID: \"49998ed7-1acb-4812-af0d-822d07292334\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.298584 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49998ed7-1acb-4812-af0d-822d07292334-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dx99x\" (UID: \"49998ed7-1acb-4812-af0d-822d07292334\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.401973 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49998ed7-1acb-4812-af0d-822d07292334-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dx99x\" (UID: \"49998ed7-1acb-4812-af0d-822d07292334\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.402182 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49998ed7-1acb-4812-af0d-822d07292334-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dx99x\" (UID: \"49998ed7-1acb-4812-af0d-822d07292334\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.402327 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54t5b\" (UniqueName: \"kubernetes.io/projected/49998ed7-1acb-4812-af0d-822d07292334-kube-api-access-54t5b\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-dx99x\" (UID: \"49998ed7-1acb-4812-af0d-822d07292334\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.407395 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49998ed7-1acb-4812-af0d-822d07292334-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dx99x\" (UID: \"49998ed7-1acb-4812-af0d-822d07292334\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.407725 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49998ed7-1acb-4812-af0d-822d07292334-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dx99x\" (UID: \"49998ed7-1acb-4812-af0d-822d07292334\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.419342 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54t5b\" (UniqueName: \"kubernetes.io/projected/49998ed7-1acb-4812-af0d-822d07292334-kube-api-access-54t5b\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dx99x\" (UID: \"49998ed7-1acb-4812-af0d-822d07292334\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" Nov 28 11:33:31 crc kubenswrapper[4772]: I1128 11:33:31.551441 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" Nov 28 11:33:32 crc kubenswrapper[4772]: I1128 11:33:32.249063 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x"] Nov 28 11:33:32 crc kubenswrapper[4772]: W1128 11:33:32.258016 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49998ed7_1acb_4812_af0d_822d07292334.slice/crio-c68cb7b50c2530aa6c671047b9b9333035fd3a856eb8ea7cd4c201b987bbdfcb WatchSource:0}: Error finding container c68cb7b50c2530aa6c671047b9b9333035fd3a856eb8ea7cd4c201b987bbdfcb: Status 404 returned error can't find the container with id c68cb7b50c2530aa6c671047b9b9333035fd3a856eb8ea7cd4c201b987bbdfcb Nov 28 11:33:33 crc kubenswrapper[4772]: I1128 11:33:33.122051 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" event={"ID":"49998ed7-1acb-4812-af0d-822d07292334","Type":"ContainerStarted","Data":"b5ef48daff5676d4b9617e0659dedbb71c9e78b06360facb0e2f9805dc46a69a"} Nov 28 11:33:33 crc kubenswrapper[4772]: I1128 11:33:33.122648 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" event={"ID":"49998ed7-1acb-4812-af0d-822d07292334","Type":"ContainerStarted","Data":"c68cb7b50c2530aa6c671047b9b9333035fd3a856eb8ea7cd4c201b987bbdfcb"} Nov 28 11:33:33 crc kubenswrapper[4772]: I1128 11:33:33.147216 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" podStartSLOduration=1.696402376 podStartE2EDuration="2.147199339s" podCreationTimestamp="2025-11-28 11:33:31 +0000 UTC" firstStartedPulling="2025-11-28 11:33:32.26210562 +0000 UTC m=+1610.585348847" lastFinishedPulling="2025-11-28 
11:33:32.712902563 +0000 UTC m=+1611.036145810" observedRunningTime="2025-11-28 11:33:33.140723063 +0000 UTC m=+1611.463966290" watchObservedRunningTime="2025-11-28 11:33:33.147199339 +0000 UTC m=+1611.470442566" Nov 28 11:33:40 crc kubenswrapper[4772]: I1128 11:33:40.081600 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-kjrfj"] Nov 28 11:33:40 crc kubenswrapper[4772]: I1128 11:33:40.099783 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-kjrfj"] Nov 28 11:33:41 crc kubenswrapper[4772]: I1128 11:33:41.081694 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c5b4-account-create-update-n4f5g"] Nov 28 11:33:41 crc kubenswrapper[4772]: I1128 11:33:41.096843 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-74a7-account-create-update-q7d68"] Nov 28 11:33:41 crc kubenswrapper[4772]: I1128 11:33:41.108715 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zmxkp"] Nov 28 11:33:41 crc kubenswrapper[4772]: I1128 11:33:41.120354 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c5b4-account-create-update-n4f5g"] Nov 28 11:33:41 crc kubenswrapper[4772]: I1128 11:33:41.132512 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-74a7-account-create-update-q7d68"] Nov 28 11:33:41 crc kubenswrapper[4772]: I1128 11:33:41.140871 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zmxkp"] Nov 28 11:33:42 crc kubenswrapper[4772]: I1128 11:33:42.019141 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b737be1-c629-4f60-9fb5-6102b6ab5cc0" path="/var/lib/kubelet/pods/5b737be1-c629-4f60-9fb5-6102b6ab5cc0/volumes" Nov 28 11:33:42 crc kubenswrapper[4772]: I1128 11:33:42.020337 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c32df24-75af-46c6-bc3a-defed81bd9e0" path="/var/lib/kubelet/pods/6c32df24-75af-46c6-bc3a-defed81bd9e0/volumes" Nov 28 11:33:42 crc kubenswrapper[4772]: I1128 11:33:42.021152 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76af22b2-2887-4353-9228-25c97bd23c28" path="/var/lib/kubelet/pods/76af22b2-2887-4353-9228-25c97bd23c28/volumes" Nov 28 11:33:42 crc kubenswrapper[4772]: I1128 11:33:42.021937 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2a5e9d0-5893-460f-8f77-f896d57f515c" path="/var/lib/kubelet/pods/b2a5e9d0-5893-460f-8f77-f896d57f515c/volumes" Nov 28 11:33:44 crc kubenswrapper[4772]: I1128 11:33:44.631549 4772 scope.go:117] "RemoveContainer" containerID="5d55adb86d8d9478012963d820b62558f92c5b502d94d177717c5a090370f602" Nov 28 11:33:44 crc kubenswrapper[4772]: I1128 11:33:44.669229 4772 scope.go:117] "RemoveContainer" containerID="c04272903dc408a8ec05e11e76c28ff87917001ad9523fb9c63a5a87d2e269ff" Nov 28 11:33:44 crc kubenswrapper[4772]: I1128 11:33:44.757323 4772 scope.go:117] "RemoveContainer" containerID="a9a14535067a89aabae9a07a65f4e6f0e19c4b7912fec3c49c31657810166d15" Nov 28 11:33:44 crc kubenswrapper[4772]: I1128 11:33:44.824470 4772 scope.go:117] "RemoveContainer" containerID="2a2e85fc0331bc2ea8c17dee3836751026aa26d07b3b13b5c68d7a4948b3611a" Nov 28 11:33:49 crc kubenswrapper[4772]: I1128 11:33:49.051500 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2145-account-create-update-5r2gh"] Nov 28 11:33:49 crc kubenswrapper[4772]: I1128 11:33:49.073417 4772 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/glance-2145-account-create-update-5r2gh"] Nov 28 11:33:49 crc kubenswrapper[4772]: I1128 11:33:49.082336 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-dgrqq"] Nov 28 11:33:49 crc kubenswrapper[4772]: I1128 11:33:49.089998 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-dgrqq"] Nov 28 11:33:50 crc kubenswrapper[4772]: I1128 11:33:50.007836 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49695051-bd3b-4650-8ac7-298d77ffd567" path="/var/lib/kubelet/pods/49695051-bd3b-4650-8ac7-298d77ffd567/volumes" Nov 28 11:33:50 crc kubenswrapper[4772]: I1128 11:33:50.008469 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7c0bb97-5fda-4dad-a178-1b336bf97c74" path="/var/lib/kubelet/pods/e7c0bb97-5fda-4dad-a178-1b336bf97c74/volumes" Nov 28 11:33:53 crc kubenswrapper[4772]: I1128 11:33:53.896319 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:33:53 crc kubenswrapper[4772]: I1128 11:33:53.897182 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:34:23 crc kubenswrapper[4772]: I1128 11:34:23.895870 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:34:23 crc kubenswrapper[4772]: I1128 11:34:23.897129 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:34:24 crc kubenswrapper[4772]: I1128 11:34:24.070313 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-r6s87"] Nov 28 11:34:24 crc kubenswrapper[4772]: I1128 11:34:24.080924 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-r6s87"] Nov 28 11:34:26 crc kubenswrapper[4772]: I1128 11:34:26.009233 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09675021-323d-41ee-aaaa-bee4e83e2544" path="/var/lib/kubelet/pods/09675021-323d-41ee-aaaa-bee4e83e2544/volumes" Nov 28 11:34:38 crc kubenswrapper[4772]: I1128 11:34:38.047793 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-001d-account-create-update-4jcc5"] Nov 28 11:34:38 crc kubenswrapper[4772]: I1128 11:34:38.064481 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-001d-account-create-update-4jcc5"] Nov 28 11:34:40 crc kubenswrapper[4772]: I1128 11:34:40.009777 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b99c928-e317-4266-a09f-5e3a2d0eb4b5" path="/var/lib/kubelet/pods/4b99c928-e317-4266-a09f-5e3a2d0eb4b5/volumes" 
Nov 28 11:34:41 crc kubenswrapper[4772]: I1128 11:34:41.028581 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-qnrnt"]
Nov 28 11:34:41 crc kubenswrapper[4772]: I1128 11:34:41.039347 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4e7b-account-create-update-nd78w"]
Nov 28 11:34:41 crc kubenswrapper[4772]: I1128 11:34:41.049139 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4e7b-account-create-update-nd78w"]
Nov 28 11:34:41 crc kubenswrapper[4772]: I1128 11:34:41.056616 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-2plvb"]
Nov 28 11:34:41 crc kubenswrapper[4772]: I1128 11:34:41.063957 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0617-account-create-update-vwt74"]
Nov 28 11:34:41 crc kubenswrapper[4772]: I1128 11:34:41.070800 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-r2v2z"]
Nov 28 11:34:41 crc kubenswrapper[4772]: I1128 11:34:41.078787 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-qnrnt"]
Nov 28 11:34:41 crc kubenswrapper[4772]: I1128 11:34:41.087906 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0617-account-create-update-vwt74"]
Nov 28 11:34:41 crc kubenswrapper[4772]: I1128 11:34:41.097567 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-r2v2z"]
Nov 28 11:34:41 crc kubenswrapper[4772]: I1128 11:34:41.105546 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-2plvb"]
Nov 28 11:34:42 crc kubenswrapper[4772]: I1128 11:34:42.012412 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1318204d-e75b-4ba8-b804-298eded6f129" path="/var/lib/kubelet/pods/1318204d-e75b-4ba8-b804-298eded6f129/volumes"
Nov 28 11:34:42 crc kubenswrapper[4772]: I1128 11:34:42.014121 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b4efdbb-8b77-4889-be40-c74e7eac7392" path="/var/lib/kubelet/pods/3b4efdbb-8b77-4889-be40-c74e7eac7392/volumes"
Nov 28 11:34:42 crc kubenswrapper[4772]: I1128 11:34:42.015431 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a4b747-c5a6-41c0-86cf-895a5fff2470" path="/var/lib/kubelet/pods/75a4b747-c5a6-41c0-86cf-895a5fff2470/volumes"
Nov 28 11:34:42 crc kubenswrapper[4772]: I1128 11:34:42.016749 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaf64ee3-3552-488c-9dff-15dc31783f22" path="/var/lib/kubelet/pods/eaf64ee3-3552-488c-9dff-15dc31783f22/volumes"
Nov 28 11:34:42 crc kubenswrapper[4772]: I1128 11:34:42.019527 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa0c2253-60c9-48a0-ab78-8a63f736b36f" path="/var/lib/kubelet/pods/fa0c2253-60c9-48a0-ab78-8a63f736b36f/volumes"
Nov 28 11:34:45 crc kubenswrapper[4772]: I1128 11:34:45.011376 4772 scope.go:117] "RemoveContainer" containerID="dbd0913af328e934d9264548f5550d72f9170bcb4051297c6714b11e801592d9"
Nov 28 11:34:45 crc kubenswrapper[4772]: I1128 11:34:45.048076 4772 scope.go:117] "RemoveContainer" containerID="6146a10f8f44980196d83eeaac6f5422ff80a5ac6e655bd081c32e8d0722c600"
Nov 28 11:34:45 crc kubenswrapper[4772]: I1128 11:34:45.097277 4772 scope.go:117] "RemoveContainer" containerID="81ca325c59ac763671d22b789a19f04c989f66fd290244f2e41ab1903d6eb651"
Nov 28 11:34:45 crc kubenswrapper[4772]: I1128 11:34:45.163599 4772 scope.go:117] "RemoveContainer" containerID="4b4fb445aa7d16168906f40cde0c3a491f8431f39d5008ab6b92fb94242f99fa"
Nov 28 11:34:45 crc kubenswrapper[4772]: I1128 11:34:45.195109 4772 scope.go:117] "RemoveContainer" containerID="6e1b8590bc271c8ed2abc0c8918619045fdfacff933981cf83b8f057e707c316"
Nov 28 11:34:45 crc kubenswrapper[4772]: I1128 11:34:45.246156 4772 scope.go:117] "RemoveContainer" containerID="87aee1b929be7af24244766c2f9dd1484218f1fec758409170c0654e4cd90347"
Nov 28 11:34:45 crc kubenswrapper[4772]: I1128 11:34:45.283340 4772 scope.go:117] "RemoveContainer" containerID="ffe0b40929021d0e4c16cb9e25cf459c2b460e76b1b15f02f8b43f6e8ab6d00d"
Nov 28 11:34:45 crc kubenswrapper[4772]: I1128 11:34:45.307113 4772 scope.go:117] "RemoveContainer" containerID="fd949075fc1212b789138e63ce1e1be6b059582793649a66619fdeb402308e3b"
Nov 28 11:34:45 crc kubenswrapper[4772]: I1128 11:34:45.327154 4772 scope.go:117] "RemoveContainer" containerID="0b3876f39bdd78130314285e169be6cb969844d2429e4ee9989ad80b993a8893"
Nov 28 11:34:46 crc kubenswrapper[4772]: I1128 11:34:46.043303 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-86jrn"]
Nov 28 11:34:46 crc kubenswrapper[4772]: I1128 11:34:46.053111 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-86jrn"]
Nov 28 11:34:48 crc kubenswrapper[4772]: I1128 11:34:48.010212 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="806ae6be-af8d-492c-8556-797040276b12" path="/var/lib/kubelet/pods/806ae6be-af8d-492c-8556-797040276b12/volumes"
Nov 28 11:34:53 crc kubenswrapper[4772]: I1128 11:34:53.896639 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 11:34:53 crc kubenswrapper[4772]: I1128 11:34:53.897345 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 11:34:53 crc kubenswrapper[4772]: I1128 11:34:53.897552 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk"
Nov 28 11:34:53 crc kubenswrapper[4772]: I1128 11:34:53.898674 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 11:34:53 crc kubenswrapper[4772]: I1128 11:34:53.898923 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" gracePeriod=600
Nov 28 11:34:54 crc kubenswrapper[4772]: E1128 11:34:54.529055 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:34:55 crc kubenswrapper[4772]: I1128 11:34:55.161244 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" exitCode=0
Nov 28 11:34:55 crc kubenswrapper[4772]: I1128 11:34:55.161328 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f"}
Nov 28 11:34:55 crc kubenswrapper[4772]: I1128 11:34:55.161931 4772 scope.go:117] "RemoveContainer" containerID="4940b44c04e1b9bb359aa7bb4ce020bd708399be9f1195fe75c4b24f11ffd061"
Nov 28 11:34:55 crc kubenswrapper[4772]: I1128 11:34:55.163221 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f"
Nov 28 11:34:55 crc kubenswrapper[4772]: E1128 11:34:55.163595 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:35:08 crc kubenswrapper[4772]: I1128 11:35:08.994897 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f"
Nov 28 11:35:08 crc kubenswrapper[4772]: E1128 11:35:08.995974 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:35:09 crc kubenswrapper[4772]: I1128 11:35:09.341594 4772 generic.go:334] "Generic (PLEG): container finished" podID="49998ed7-1acb-4812-af0d-822d07292334" containerID="b5ef48daff5676d4b9617e0659dedbb71c9e78b06360facb0e2f9805dc46a69a" exitCode=0
Nov 28 11:35:09 crc kubenswrapper[4772]: I1128 11:35:09.341642 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" event={"ID":"49998ed7-1acb-4812-af0d-822d07292334","Type":"ContainerDied","Data":"b5ef48daff5676d4b9617e0659dedbb71c9e78b06360facb0e2f9805dc46a69a"}
Nov 28 11:35:10 crc kubenswrapper[4772]: I1128 11:35:10.911055 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x"
Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.000419 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54t5b\" (UniqueName: \"kubernetes.io/projected/49998ed7-1acb-4812-af0d-822d07292334-kube-api-access-54t5b\") pod \"49998ed7-1acb-4812-af0d-822d07292334\" (UID: \"49998ed7-1acb-4812-af0d-822d07292334\") "
Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.001926 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49998ed7-1acb-4812-af0d-822d07292334-inventory\") pod \"49998ed7-1acb-4812-af0d-822d07292334\" (UID: \"49998ed7-1acb-4812-af0d-822d07292334\") "
Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.002081 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49998ed7-1acb-4812-af0d-822d07292334-ssh-key\") pod \"49998ed7-1acb-4812-af0d-822d07292334\" (UID: \"49998ed7-1acb-4812-af0d-822d07292334\") "
Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.008964 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49998ed7-1acb-4812-af0d-822d07292334-kube-api-access-54t5b" (OuterVolumeSpecName: "kube-api-access-54t5b") pod "49998ed7-1acb-4812-af0d-822d07292334" (UID: "49998ed7-1acb-4812-af0d-822d07292334"). InnerVolumeSpecName "kube-api-access-54t5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.041618 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49998ed7-1acb-4812-af0d-822d07292334-inventory" (OuterVolumeSpecName: "inventory") pod "49998ed7-1acb-4812-af0d-822d07292334" (UID: "49998ed7-1acb-4812-af0d-822d07292334"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.058167 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49998ed7-1acb-4812-af0d-822d07292334-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "49998ed7-1acb-4812-af0d-822d07292334" (UID: "49998ed7-1acb-4812-af0d-822d07292334"). InnerVolumeSpecName "ssh-key".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.105081 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54t5b\" (UniqueName: \"kubernetes.io/projected/49998ed7-1acb-4812-af0d-822d07292334-kube-api-access-54t5b\") on node \"crc\" DevicePath \"\"" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.105497 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49998ed7-1acb-4812-af0d-822d07292334-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.105520 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49998ed7-1acb-4812-af0d-822d07292334-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.371620 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" event={"ID":"49998ed7-1acb-4812-af0d-822d07292334","Type":"ContainerDied","Data":"c68cb7b50c2530aa6c671047b9b9333035fd3a856eb8ea7cd4c201b987bbdfcb"} Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.371702 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c68cb7b50c2530aa6c671047b9b9333035fd3a856eb8ea7cd4c201b987bbdfcb" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.371732 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dx99x" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.502973 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7"] Nov 28 11:35:11 crc kubenswrapper[4772]: E1128 11:35:11.503864 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49998ed7-1acb-4812-af0d-822d07292334" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.503900 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="49998ed7-1acb-4812-af0d-822d07292334" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.504262 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="49998ed7-1acb-4812-af0d-822d07292334" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.507097 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.510402 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.510582 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.514156 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7"] Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.514897 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.517407 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.616991 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-68zj7\" (UID: \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.617140 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-68zj7\" (UID: \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.617456 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2g96\" (UniqueName: \"kubernetes.io/projected/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-kube-api-access-t2g96\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-68zj7\" (UID: \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.719571 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-68zj7\" (UID: \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.719820 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2g96\" (UniqueName: \"kubernetes.io/projected/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-kube-api-access-t2g96\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-68zj7\" (UID: \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.720045 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-ssh-key\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-68zj7\" (UID: \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.724767 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-68zj7\" (UID: \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.725983 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-68zj7\" (UID: \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.744288 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2g96\" (UniqueName: \"kubernetes.io/projected/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-kube-api-access-t2g96\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-68zj7\" (UID: \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" Nov 28 11:35:11 crc kubenswrapper[4772]: I1128 11:35:11.833959 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" Nov 28 11:35:12 crc kubenswrapper[4772]: W1128 11:35:12.405426 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cf12a49_cbf2_4721_92a0_e4f9f88deb0c.slice/crio-48593e53086667e82c7a04f0164b4773e79128a9105a7a0c87dd5eabfd51a1a2 WatchSource:0}: Error finding container 48593e53086667e82c7a04f0164b4773e79128a9105a7a0c87dd5eabfd51a1a2: Status 404 returned error can't find the container with id 48593e53086667e82c7a04f0164b4773e79128a9105a7a0c87dd5eabfd51a1a2 Nov 28 11:35:12 crc kubenswrapper[4772]: I1128 11:35:12.408759 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7"] Nov 28 11:35:13 crc kubenswrapper[4772]: I1128 11:35:13.392084 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" event={"ID":"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c","Type":"ContainerStarted","Data":"201693d26c2ee0ec7771de415a4bef372b69ce85b5c933366e69bab5373bebbd"} Nov 28 11:35:13 crc kubenswrapper[4772]: I1128 11:35:13.392647 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" event={"ID":"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c","Type":"ContainerStarted","Data":"48593e53086667e82c7a04f0164b4773e79128a9105a7a0c87dd5eabfd51a1a2"} Nov 28 11:35:19 crc kubenswrapper[4772]: I1128 11:35:19.998581 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:35:20 crc kubenswrapper[4772]: E1128 11:35:19.999576 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:35:20 crc kubenswrapper[4772]: I1128 11:35:20.051982 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" podStartSLOduration=8.573615908 podStartE2EDuration="9.051958153s" podCreationTimestamp="2025-11-28 11:35:11 +0000 UTC" firstStartedPulling="2025-11-28 11:35:12.408866969 +0000 UTC m=+1710.732110206" lastFinishedPulling="2025-11-28 11:35:12.887209214 +0000 UTC m=+1711.210452451" observedRunningTime="2025-11-28 11:35:13.418661244 +0000 UTC m=+1711.741904481" watchObservedRunningTime="2025-11-28 11:35:20.051958153 +0000 UTC m=+1718.375201380" Nov 28 11:35:20 crc kubenswrapper[4772]: I1128 11:35:20.052841 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-gkxww"] Nov 28 11:35:20 crc kubenswrapper[4772]: I1128 11:35:20.062732 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-gkxww"] Nov 28 11:35:22 crc kubenswrapper[4772]: I1128 11:35:22.006503 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c75297-c867-4c84-8183-239f47947895" path="/var/lib/kubelet/pods/31c75297-c867-4c84-8183-239f47947895/volumes" Nov 28 11:35:29 crc kubenswrapper[4772]: I1128 11:35:29.039592 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-76nzr"] Nov 28 11:35:29 crc kubenswrapper[4772]: I1128 11:35:29.051104 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-krcf2"] Nov 28 11:35:29 crc kubenswrapper[4772]: I1128 11:35:29.060971 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-krcf2"] Nov 28 11:35:29 crc kubenswrapper[4772]: I1128 11:35:29.069405 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-76nzr"] Nov 28 11:35:30 crc kubenswrapper[4772]: I1128 11:35:30.004908 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d1f86f7-529a-4ed1-885d-1beb4c14b213" path="/var/lib/kubelet/pods/3d1f86f7-529a-4ed1-885d-1beb4c14b213/volumes" Nov 28 11:35:30 crc kubenswrapper[4772]: I1128 11:35:30.005923 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e46680be-1091-4b51-a858-2365af24a086" path="/var/lib/kubelet/pods/e46680be-1091-4b51-a858-2365af24a086/volumes" Nov 28 11:35:33 crc kubenswrapper[4772]: I1128 11:35:33.995500 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:35:33 crc kubenswrapper[4772]: E1128 11:35:33.996580 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:35:35 crc kubenswrapper[4772]: E1128 11:35:35.963514 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in 
memory cache]" Nov 28 11:35:40 crc kubenswrapper[4772]: I1128 11:35:40.054551 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-j9ck9"] Nov 28 11:35:40 crc kubenswrapper[4772]: I1128 11:35:40.063402 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-j9ck9"] Nov 28 11:35:42 crc kubenswrapper[4772]: I1128 11:35:42.031665 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59e2ef4f-2f84-4b24-9a30-27fea471faf5" path="/var/lib/kubelet/pods/59e2ef4f-2f84-4b24-9a30-27fea471faf5/volumes" Nov 28 11:35:43 crc kubenswrapper[4772]: I1128 11:35:43.043553 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-72xt2"] Nov 28 11:35:43 crc kubenswrapper[4772]: I1128 11:35:43.059644 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-72xt2"] Nov 28 11:35:44 crc kubenswrapper[4772]: I1128 11:35:44.009109 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe" path="/var/lib/kubelet/pods/2d7cfd6d-5cc3-4f85-9052-606a2bbe5dfe/volumes" Nov 28 11:35:45 crc kubenswrapper[4772]: I1128 11:35:45.540453 4772 scope.go:117] "RemoveContainer" containerID="3b04f2c7fd2ed503d91c08f1c1cd318ce9b8dcbedb98723a1e067fa8c386b690" Nov 28 11:35:45 crc kubenswrapper[4772]: I1128 11:35:45.597542 4772 scope.go:117] "RemoveContainer" containerID="4c3f2ad0c3c4bc426a6476bf20847d23e2b08a70dcb35d573dc0c522ef0f15df" Nov 28 11:35:45 crc kubenswrapper[4772]: I1128 11:35:45.646957 4772 scope.go:117] "RemoveContainer" containerID="84264bb416fd01c01bb7e11ef9f7e9b973a10f012c5a264cf6c2b9a4fdae3df5" Nov 28 11:35:45 crc kubenswrapper[4772]: I1128 11:35:45.765423 4772 scope.go:117] "RemoveContainer" containerID="97f04da42ccc37d18ada52c2535292efc67ed9c75175f717c689312537f1ee5a" Nov 28 11:35:45 crc kubenswrapper[4772]: I1128 11:35:45.792966 4772 scope.go:117] "RemoveContainer" containerID="1b14c4c8c701e2c3c64864f1646a6c347c5d3a9e2d758e143dcc7ac3fffaa706" Nov 28 11:35:45 crc kubenswrapper[4772]: I1128 11:35:45.850777 4772 scope.go:117] "RemoveContainer" containerID="75e39f58e849bce92ad3b13b58d906f6aa75950642f86a9f1f1db89e64d02e75" Nov 28 11:35:45 crc kubenswrapper[4772]: I1128 11:35:45.996078 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:35:45 crc kubenswrapper[4772]: E1128 11:35:45.996421 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:35:59 crc kubenswrapper[4772]: I1128 11:35:59.996030 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:36:00 crc kubenswrapper[4772]: E1128 11:36:00.002782 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" 
podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:36:13 crc kubenswrapper[4772]: I1128 11:36:13.997011 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:36:13 crc kubenswrapper[4772]: E1128 11:36:13.998523 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:36:26 crc kubenswrapper[4772]: I1128 11:36:26.218155 4772 generic.go:334] "Generic (PLEG): container finished" podID="9cf12a49-cbf2-4721-92a0-e4f9f88deb0c" containerID="201693d26c2ee0ec7771de415a4bef372b69ce85b5c933366e69bab5373bebbd" exitCode=0 Nov 28 11:36:26 crc kubenswrapper[4772]: I1128 11:36:26.218236 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" event={"ID":"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c","Type":"ContainerDied","Data":"201693d26c2ee0ec7771de415a4bef372b69ce85b5c933366e69bab5373bebbd"} Nov 28 11:36:27 crc kubenswrapper[4772]: I1128 11:36:27.660602 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" Nov 28 11:36:27 crc kubenswrapper[4772]: I1128 11:36:27.735426 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2g96\" (UniqueName: \"kubernetes.io/projected/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-kube-api-access-t2g96\") pod \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\" (UID: \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\") " Nov 28 11:36:27 crc kubenswrapper[4772]: I1128 11:36:27.735577 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-ssh-key\") pod \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\" (UID: \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\") " Nov 28 11:36:27 crc kubenswrapper[4772]: I1128 11:36:27.735699 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-inventory\") pod \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\" (UID: \"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c\") " Nov 28 11:36:27 crc kubenswrapper[4772]: I1128 11:36:27.743086 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-kube-api-access-t2g96" (OuterVolumeSpecName: "kube-api-access-t2g96") pod "9cf12a49-cbf2-4721-92a0-e4f9f88deb0c" (UID: "9cf12a49-cbf2-4721-92a0-e4f9f88deb0c"). InnerVolumeSpecName "kube-api-access-t2g96". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:36:27 crc kubenswrapper[4772]: I1128 11:36:27.776608 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-inventory" (OuterVolumeSpecName: "inventory") pod "9cf12a49-cbf2-4721-92a0-e4f9f88deb0c" (UID: "9cf12a49-cbf2-4721-92a0-e4f9f88deb0c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:36:27 crc kubenswrapper[4772]: I1128 11:36:27.776729 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9cf12a49-cbf2-4721-92a0-e4f9f88deb0c" (UID: "9cf12a49-cbf2-4721-92a0-e4f9f88deb0c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:36:27 crc kubenswrapper[4772]: I1128 11:36:27.838593 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2g96\" (UniqueName: \"kubernetes.io/projected/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-kube-api-access-t2g96\") on node \"crc\" DevicePath \"\"" Nov 28 11:36:27 crc kubenswrapper[4772]: I1128 11:36:27.838646 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:36:27 crc kubenswrapper[4772]: I1128 11:36:27.838660 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cf12a49-cbf2-4721-92a0-e4f9f88deb0c-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:36:27 crc kubenswrapper[4772]: I1128 11:36:27.994546 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:36:27 crc kubenswrapper[4772]: E1128 11:36:27.995114 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.271692 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" event={"ID":"9cf12a49-cbf2-4721-92a0-e4f9f88deb0c","Type":"ContainerDied","Data":"48593e53086667e82c7a04f0164b4773e79128a9105a7a0c87dd5eabfd51a1a2"} Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.278605 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48593e53086667e82c7a04f0164b4773e79128a9105a7a0c87dd5eabfd51a1a2" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.272448 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-68zj7" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.359593 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp"] Nov 28 11:36:28 crc kubenswrapper[4772]: E1128 11:36:28.360400 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf12a49-cbf2-4721-92a0-e4f9f88deb0c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.360429 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf12a49-cbf2-4721-92a0-e4f9f88deb0c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.360734 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf12a49-cbf2-4721-92a0-e4f9f88deb0c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.361773 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.365599 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.366195 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.366473 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.366754 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.371828 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp"] Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.461678 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dw5g\" (UniqueName: \"kubernetes.io/projected/553d598e-4476-450b-952b-f8269626bfa5-kube-api-access-6dw5g\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t78xp\" (UID: \"553d598e-4476-450b-952b-f8269626bfa5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.461984 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/553d598e-4476-450b-952b-f8269626bfa5-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t78xp\" (UID: \"553d598e-4476-450b-952b-f8269626bfa5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.462129 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/553d598e-4476-450b-952b-f8269626bfa5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t78xp\" (UID: \"553d598e-4476-450b-952b-f8269626bfa5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" Nov 28 11:36:28 crc 
kubenswrapper[4772]: I1128 11:36:28.564202 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/553d598e-4476-450b-952b-f8269626bfa5-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t78xp\" (UID: \"553d598e-4476-450b-952b-f8269626bfa5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.564307 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/553d598e-4476-450b-952b-f8269626bfa5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t78xp\" (UID: \"553d598e-4476-450b-952b-f8269626bfa5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.564473 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dw5g\" (UniqueName: \"kubernetes.io/projected/553d598e-4476-450b-952b-f8269626bfa5-kube-api-access-6dw5g\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t78xp\" (UID: \"553d598e-4476-450b-952b-f8269626bfa5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.572132 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/553d598e-4476-450b-952b-f8269626bfa5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t78xp\" (UID: \"553d598e-4476-450b-952b-f8269626bfa5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.572684 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/553d598e-4476-450b-952b-f8269626bfa5-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t78xp\" (UID: \"553d598e-4476-450b-952b-f8269626bfa5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.595593 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dw5g\" (UniqueName: \"kubernetes.io/projected/553d598e-4476-450b-952b-f8269626bfa5-kube-api-access-6dw5g\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t78xp\" (UID: \"553d598e-4476-450b-952b-f8269626bfa5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" Nov 28 11:36:28 crc kubenswrapper[4772]: I1128 11:36:28.693541 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" Nov 28 11:36:29 crc kubenswrapper[4772]: I1128 11:36:29.341080 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp"] Nov 28 11:36:30 crc kubenswrapper[4772]: I1128 11:36:30.294046 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" event={"ID":"553d598e-4476-450b-952b-f8269626bfa5","Type":"ContainerStarted","Data":"7b6831251ad8bcf5f59a4b89b035566b799baf200f7bdd4655b875b63241ca2f"} Nov 28 11:36:31 crc kubenswrapper[4772]: I1128 11:36:31.305602 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" event={"ID":"553d598e-4476-450b-952b-f8269626bfa5","Type":"ContainerStarted","Data":"df49ade67cc392bc9aa1234bdc33fea91f9d188ed16d84fbbb778a12ceadad3e"} Nov 28 11:36:36 crc kubenswrapper[4772]: I1128 11:36:36.357896 4772 generic.go:334] "Generic (PLEG): container finished" podID="553d598e-4476-450b-952b-f8269626bfa5" containerID="df49ade67cc392bc9aa1234bdc33fea91f9d188ed16d84fbbb778a12ceadad3e" exitCode=0 Nov 28 11:36:36 crc kubenswrapper[4772]: I1128 11:36:36.357979 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" event={"ID":"553d598e-4476-450b-952b-f8269626bfa5","Type":"ContainerDied","Data":"df49ade67cc392bc9aa1234bdc33fea91f9d188ed16d84fbbb778a12ceadad3e"} Nov 28 11:36:37 crc kubenswrapper[4772]: I1128 11:36:37.839948 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.019266 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/553d598e-4476-450b-952b-f8269626bfa5-inventory\") pod \"553d598e-4476-450b-952b-f8269626bfa5\" (UID: \"553d598e-4476-450b-952b-f8269626bfa5\") " Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.019345 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/553d598e-4476-450b-952b-f8269626bfa5-ssh-key\") pod \"553d598e-4476-450b-952b-f8269626bfa5\" (UID: \"553d598e-4476-450b-952b-f8269626bfa5\") " Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.019466 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dw5g\" (UniqueName: \"kubernetes.io/projected/553d598e-4476-450b-952b-f8269626bfa5-kube-api-access-6dw5g\") pod \"553d598e-4476-450b-952b-f8269626bfa5\" (UID: \"553d598e-4476-450b-952b-f8269626bfa5\") " Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.027868 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553d598e-4476-450b-952b-f8269626bfa5-kube-api-access-6dw5g" (OuterVolumeSpecName: "kube-api-access-6dw5g") pod "553d598e-4476-450b-952b-f8269626bfa5" (UID: "553d598e-4476-450b-952b-f8269626bfa5"). InnerVolumeSpecName "kube-api-access-6dw5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.085567 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553d598e-4476-450b-952b-f8269626bfa5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "553d598e-4476-450b-952b-f8269626bfa5" (UID: "553d598e-4476-450b-952b-f8269626bfa5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.100048 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553d598e-4476-450b-952b-f8269626bfa5-inventory" (OuterVolumeSpecName: "inventory") pod "553d598e-4476-450b-952b-f8269626bfa5" (UID: "553d598e-4476-450b-952b-f8269626bfa5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.121893 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/553d598e-4476-450b-952b-f8269626bfa5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.121932 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dw5g\" (UniqueName: \"kubernetes.io/projected/553d598e-4476-450b-952b-f8269626bfa5-kube-api-access-6dw5g\") on node \"crc\" DevicePath \"\"" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.121950 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/553d598e-4476-450b-952b-f8269626bfa5-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.187425 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vzlbg"] Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.187894 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-a871-account-create-update-8tsff"] Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.187980 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2q5qg"] Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.188050 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-115c-account-create-update-l4s9j"] Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.188117 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mkk4l"] Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.188186 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vzlbg"] Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.188263 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-2q5qg"] Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.188337 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mkk4l"] Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.188446 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-a871-account-create-update-8tsff"] Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.188518 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-115c-account-create-update-l4s9j"] Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.386637 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" 
event={"ID":"553d598e-4476-450b-952b-f8269626bfa5","Type":"ContainerDied","Data":"7b6831251ad8bcf5f59a4b89b035566b799baf200f7bdd4655b875b63241ca2f"} Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.387121 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b6831251ad8bcf5f59a4b89b035566b799baf200f7bdd4655b875b63241ca2f" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.386728 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t78xp" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.477979 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5"] Nov 28 11:36:38 crc kubenswrapper[4772]: E1128 11:36:38.478456 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553d598e-4476-450b-952b-f8269626bfa5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.478481 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="553d598e-4476-450b-952b-f8269626bfa5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.478718 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="553d598e-4476-450b-952b-f8269626bfa5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.479476 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.482014 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.482136 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.482727 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.483585 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.500046 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5"] Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.631373 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65b46c0-b274-454b-b98d-b2425334abfd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-78lv5\" (UID: \"e65b46c0-b274-454b-b98d-b2425334abfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.631430 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx5wx\" (UniqueName: \"kubernetes.io/projected/e65b46c0-b274-454b-b98d-b2425334abfd-kube-api-access-vx5wx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-78lv5\" (UID: \"e65b46c0-b274-454b-b98d-b2425334abfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" Nov 28 11:36:38 crc kubenswrapper[4772]: 
I1128 11:36:38.631809 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e65b46c0-b274-454b-b98d-b2425334abfd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-78lv5\" (UID: \"e65b46c0-b274-454b-b98d-b2425334abfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.734336 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e65b46c0-b274-454b-b98d-b2425334abfd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-78lv5\" (UID: \"e65b46c0-b274-454b-b98d-b2425334abfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.734582 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65b46c0-b274-454b-b98d-b2425334abfd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-78lv5\" (UID: \"e65b46c0-b274-454b-b98d-b2425334abfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.734677 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx5wx\" (UniqueName: \"kubernetes.io/projected/e65b46c0-b274-454b-b98d-b2425334abfd-kube-api-access-vx5wx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-78lv5\" (UID: \"e65b46c0-b274-454b-b98d-b2425334abfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.740071 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65b46c0-b274-454b-b98d-b2425334abfd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-78lv5\" (UID: \"e65b46c0-b274-454b-b98d-b2425334abfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.741280 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e65b46c0-b274-454b-b98d-b2425334abfd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-78lv5\" (UID: \"e65b46c0-b274-454b-b98d-b2425334abfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.753525 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx5wx\" (UniqueName: \"kubernetes.io/projected/e65b46c0-b274-454b-b98d-b2425334abfd-kube-api-access-vx5wx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-78lv5\" (UID: \"e65b46c0-b274-454b-b98d-b2425334abfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" Nov 28 11:36:38 crc kubenswrapper[4772]: I1128 11:36:38.799907 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" Nov 28 11:36:39 crc kubenswrapper[4772]: I1128 11:36:39.046881 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e98e-account-create-update-77lrr"] Nov 28 11:36:39 crc kubenswrapper[4772]: I1128 11:36:39.066439 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e98e-account-create-update-77lrr"] Nov 28 11:36:39 crc kubenswrapper[4772]: I1128 11:36:39.448691 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5"] Nov 28 11:36:39 crc kubenswrapper[4772]: W1128 11:36:39.458050 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode65b46c0_b274_454b_b98d_b2425334abfd.slice/crio-a3886d2b95a648f1fcceffd26d4af7d7204a4ff4645824ef94c12456e59412bf WatchSource:0}: Error finding container a3886d2b95a648f1fcceffd26d4af7d7204a4ff4645824ef94c12456e59412bf: Status 404 returned error can't find the container with id a3886d2b95a648f1fcceffd26d4af7d7204a4ff4645824ef94c12456e59412bf Nov 28 11:36:40 crc kubenswrapper[4772]: I1128 11:36:40.016039 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10e51d4a-d9ff-4393-bb1f-5ad90c13096f" path="/var/lib/kubelet/pods/10e51d4a-d9ff-4393-bb1f-5ad90c13096f/volumes" Nov 28 11:36:40 crc kubenswrapper[4772]: I1128 11:36:40.017397 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11fb5581-331a-4b2b-9cae-ca7679b297b2" path="/var/lib/kubelet/pods/11fb5581-331a-4b2b-9cae-ca7679b297b2/volumes" Nov 28 11:36:40 crc kubenswrapper[4772]: I1128 11:36:40.019155 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a2af00-c296-4e60-849c-e0157763aaa8" path="/var/lib/kubelet/pods/27a2af00-c296-4e60-849c-e0157763aaa8/volumes" Nov 28 11:36:40 crc kubenswrapper[4772]: I1128 11:36:40.020435 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3068411c-0dc8-47e4-a58b-abf587764c20" path="/var/lib/kubelet/pods/3068411c-0dc8-47e4-a58b-abf587764c20/volumes" Nov 28 11:36:40 crc kubenswrapper[4772]: I1128 11:36:40.022627 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7440d09-9e03-450e-bbda-f0680e63dac4" path="/var/lib/kubelet/pods/b7440d09-9e03-450e-bbda-f0680e63dac4/volumes" Nov 28 11:36:40 crc kubenswrapper[4772]: I1128 11:36:40.023814 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe1b7037-466b-4290-be1a-7ede41184bfc" path="/var/lib/kubelet/pods/fe1b7037-466b-4290-be1a-7ede41184bfc/volumes" Nov 28 11:36:40 crc kubenswrapper[4772]: I1128 11:36:40.417321 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" event={"ID":"e65b46c0-b274-454b-b98d-b2425334abfd","Type":"ContainerStarted","Data":"a3886d2b95a648f1fcceffd26d4af7d7204a4ff4645824ef94c12456e59412bf"} Nov 28 11:36:41 crc kubenswrapper[4772]: I1128 11:36:41.433143 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" event={"ID":"e65b46c0-b274-454b-b98d-b2425334abfd","Type":"ContainerStarted","Data":"e4789a74b8ce5e9414ea69838f51321ca07ff7e4a2f7a86daac561d3e0da81c2"} Nov 28 11:36:41 crc kubenswrapper[4772]: I1128 11:36:41.476602 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" podStartSLOduration=2.811550952 podStartE2EDuration="3.47655983s" podCreationTimestamp="2025-11-28 11:36:38 +0000 UTC" firstStartedPulling="2025-11-28 11:36:39.461508005 +0000 UTC m=+1797.784751232" lastFinishedPulling="2025-11-28 11:36:40.126516843 +0000 UTC m=+1798.449760110" observedRunningTime="2025-11-28 11:36:41.458505772 +0000 UTC m=+1799.781749059" watchObservedRunningTime="2025-11-28 11:36:41.47655983 +0000 UTC m=+1799.799803087" Nov 28 11:36:42 crc kubenswrapper[4772]: I1128 11:36:42.014618 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:36:42 crc kubenswrapper[4772]: E1128 11:36:42.015461 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:36:45 crc kubenswrapper[4772]: I1128 11:36:45.996662 4772 scope.go:117] "RemoveContainer" containerID="2b59dc554a6b2661925f76e51a84ab82ad708a154ae29c90beb48f9e058787a0" Nov 28 11:36:46 crc kubenswrapper[4772]: I1128 11:36:46.035163 4772 scope.go:117] "RemoveContainer" containerID="c4556fed18bd7278f59379064d9e83ad1c101780f41a8d09b83f20f286f4d7a8" Nov 28 11:36:46 crc kubenswrapper[4772]: I1128 11:36:46.096104 4772 scope.go:117] "RemoveContainer" containerID="6bdbccb0995d232a89d0137697602459ed8d809c0ebc5068cabe773bb605996e" Nov 28 11:36:46 crc kubenswrapper[4772]: I1128 11:36:46.126095 4772 scope.go:117] "RemoveContainer" containerID="5941f2beaf68fdb40dd4f34ec18562017a951a423c67454ff3ae25516da8a83a" Nov 28 11:36:46 crc kubenswrapper[4772]: I1128 11:36:46.171661 4772 scope.go:117] "RemoveContainer" containerID="6b9e227bfa469621cfb27e17895b3b47ccd09a46d896d36764f3034cdfc68e71" Nov 28 11:36:46 crc kubenswrapper[4772]: I1128 11:36:46.209139 4772 scope.go:117] "RemoveContainer" containerID="8e4f54838d407ea8bc59725e4884b847b2c9262e16c00908382b8b9774706ce5" Nov 28 11:36:55 crc kubenswrapper[4772]: I1128 11:36:55.994154 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:36:55 crc kubenswrapper[4772]: E1128 11:36:55.995005 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:37:04 crc kubenswrapper[4772]: I1128 11:37:04.049423 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8jh8h"] Nov 28 11:37:04 crc kubenswrapper[4772]: I1128 11:37:04.067400 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8jh8h"] Nov 28 11:37:06 crc kubenswrapper[4772]: I1128 11:37:06.013828 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8efe8b01-e196-469d-b817-4864b4de95d4" path="/var/lib/kubelet/pods/8efe8b01-e196-469d-b817-4864b4de95d4/volumes" Nov 28 11:37:07 crc 
kubenswrapper[4772]: I1128 11:37:07.994495 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:37:07 crc kubenswrapper[4772]: E1128 11:37:07.994936 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:37:22 crc kubenswrapper[4772]: I1128 11:37:22.001676 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:37:22 crc kubenswrapper[4772]: E1128 11:37:22.002577 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:37:22 crc kubenswrapper[4772]: I1128 11:37:22.873037 4772 generic.go:334] "Generic (PLEG): container finished" podID="e65b46c0-b274-454b-b98d-b2425334abfd" containerID="e4789a74b8ce5e9414ea69838f51321ca07ff7e4a2f7a86daac561d3e0da81c2" exitCode=0 Nov 28 11:37:22 crc kubenswrapper[4772]: I1128 11:37:22.873295 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" event={"ID":"e65b46c0-b274-454b-b98d-b2425334abfd","Type":"ContainerDied","Data":"e4789a74b8ce5e9414ea69838f51321ca07ff7e4a2f7a86daac561d3e0da81c2"} Nov 28 11:37:24 crc kubenswrapper[4772]: I1128 11:37:24.473330 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" Nov 28 11:37:24 crc kubenswrapper[4772]: I1128 11:37:24.603309 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx5wx\" (UniqueName: \"kubernetes.io/projected/e65b46c0-b274-454b-b98d-b2425334abfd-kube-api-access-vx5wx\") pod \"e65b46c0-b274-454b-b98d-b2425334abfd\" (UID: \"e65b46c0-b274-454b-b98d-b2425334abfd\") " Nov 28 11:37:24 crc kubenswrapper[4772]: I1128 11:37:24.603578 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e65b46c0-b274-454b-b98d-b2425334abfd-ssh-key\") pod \"e65b46c0-b274-454b-b98d-b2425334abfd\" (UID: \"e65b46c0-b274-454b-b98d-b2425334abfd\") " Nov 28 11:37:24 crc kubenswrapper[4772]: I1128 11:37:24.603657 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65b46c0-b274-454b-b98d-b2425334abfd-inventory\") pod \"e65b46c0-b274-454b-b98d-b2425334abfd\" (UID: \"e65b46c0-b274-454b-b98d-b2425334abfd\") " Nov 28 11:37:24 crc kubenswrapper[4772]: I1128 11:37:24.615174 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65b46c0-b274-454b-b98d-b2425334abfd-kube-api-access-vx5wx" (OuterVolumeSpecName: "kube-api-access-vx5wx") pod "e65b46c0-b274-454b-b98d-b2425334abfd" (UID: "e65b46c0-b274-454b-b98d-b2425334abfd"). 
InnerVolumeSpecName "kube-api-access-vx5wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:37:24 crc kubenswrapper[4772]: I1128 11:37:24.639067 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65b46c0-b274-454b-b98d-b2425334abfd-inventory" (OuterVolumeSpecName: "inventory") pod "e65b46c0-b274-454b-b98d-b2425334abfd" (UID: "e65b46c0-b274-454b-b98d-b2425334abfd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:37:24 crc kubenswrapper[4772]: I1128 11:37:24.653994 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65b46c0-b274-454b-b98d-b2425334abfd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e65b46c0-b274-454b-b98d-b2425334abfd" (UID: "e65b46c0-b274-454b-b98d-b2425334abfd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:37:24 crc kubenswrapper[4772]: I1128 11:37:24.706570 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx5wx\" (UniqueName: \"kubernetes.io/projected/e65b46c0-b274-454b-b98d-b2425334abfd-kube-api-access-vx5wx\") on node \"crc\" DevicePath \"\"" Nov 28 11:37:24 crc kubenswrapper[4772]: I1128 11:37:24.706629 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e65b46c0-b274-454b-b98d-b2425334abfd-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:37:24 crc kubenswrapper[4772]: I1128 11:37:24.706641 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65b46c0-b274-454b-b98d-b2425334abfd-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:37:24 crc kubenswrapper[4772]: I1128 11:37:24.894130 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" event={"ID":"e65b46c0-b274-454b-b98d-b2425334abfd","Type":"ContainerDied","Data":"a3886d2b95a648f1fcceffd26d4af7d7204a4ff4645824ef94c12456e59412bf"} Nov 28 11:37:24 crc kubenswrapper[4772]: I1128 11:37:24.894713 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3886d2b95a648f1fcceffd26d4af7d7204a4ff4645824ef94c12456e59412bf" Nov 28 11:37:24 crc kubenswrapper[4772]: I1128 11:37:24.894209 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-78lv5" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.018070 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j"] Nov 28 11:37:25 crc kubenswrapper[4772]: E1128 11:37:25.018825 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65b46c0-b274-454b-b98d-b2425334abfd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.018846 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65b46c0-b274-454b-b98d-b2425334abfd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.019168 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65b46c0-b274-454b-b98d-b2425334abfd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.020275 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.027722 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.027836 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.028935 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.029152 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.049260 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j"] Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.115111 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j\" (UID: \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.115198 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v7bt\" (UniqueName: \"kubernetes.io/projected/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-kube-api-access-4v7bt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j\" (UID: \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.115554 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j\" (UID: \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.216792 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j\" (UID: \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.217026 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j\" (UID: \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.217107 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v7bt\" (UniqueName: \"kubernetes.io/projected/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-kube-api-access-4v7bt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j\" 
(UID: \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.224291 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j\" (UID: \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.225106 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j\" (UID: \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.239162 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v7bt\" (UniqueName: \"kubernetes.io/projected/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-kube-api-access-4v7bt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j\" (UID: \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.343595 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" Nov 28 11:37:25 crc kubenswrapper[4772]: I1128 11:37:25.944587 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j"] Nov 28 11:37:25 crc kubenswrapper[4772]: W1128 11:37:25.957521 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5a1a6ce_5a94_4f1a_a6c4_5bd7b71782fe.slice/crio-af2f4e416f84e6095be704b6601de8daf57374515fcc236a022b3ca8ea289bb3 WatchSource:0}: Error finding container af2f4e416f84e6095be704b6601de8daf57374515fcc236a022b3ca8ea289bb3: Status 404 returned error can't find the container with id af2f4e416f84e6095be704b6601de8daf57374515fcc236a022b3ca8ea289bb3 Nov 28 11:37:26 crc kubenswrapper[4772]: I1128 11:37:26.921257 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" event={"ID":"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe","Type":"ContainerStarted","Data":"65729d155ccab0e3a7b7731cdeffbea9ec71d1b2654525a80a6c534d7b91968a"} Nov 28 11:37:26 crc kubenswrapper[4772]: I1128 11:37:26.922337 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" event={"ID":"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe","Type":"ContainerStarted","Data":"af2f4e416f84e6095be704b6601de8daf57374515fcc236a022b3ca8ea289bb3"} Nov 28 11:37:26 crc kubenswrapper[4772]: I1128 11:37:26.939741 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" podStartSLOduration=2.412056015 podStartE2EDuration="2.939725042s" podCreationTimestamp="2025-11-28 11:37:24 +0000 UTC" firstStartedPulling="2025-11-28 11:37:25.960406741 +0000 UTC m=+1844.283650008" lastFinishedPulling="2025-11-28 11:37:26.488075808 +0000 UTC m=+1844.811319035" observedRunningTime="2025-11-28 
11:37:26.938572231 +0000 UTC m=+1845.261815468" watchObservedRunningTime="2025-11-28 11:37:26.939725042 +0000 UTC m=+1845.262968269" Nov 28 11:37:29 crc kubenswrapper[4772]: I1128 11:37:29.056222 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-68hmw"] Nov 28 11:37:29 crc kubenswrapper[4772]: I1128 11:37:29.068124 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-68hmw"] Nov 28 11:37:30 crc kubenswrapper[4772]: I1128 11:37:30.016534 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e87f3486-967b-47b7-87ff-b3c11d66e63c" path="/var/lib/kubelet/pods/e87f3486-967b-47b7-87ff-b3c11d66e63c/volumes" Nov 28 11:37:31 crc kubenswrapper[4772]: I1128 11:37:31.035132 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lpnr6"] Nov 28 11:37:31 crc kubenswrapper[4772]: I1128 11:37:31.045081 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lpnr6"] Nov 28 11:37:32 crc kubenswrapper[4772]: I1128 11:37:32.008569 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf647836-f37f-448a-ac84-c610cf7c0125" path="/var/lib/kubelet/pods/bf647836-f37f-448a-ac84-c610cf7c0125/volumes" Nov 28 11:37:32 crc kubenswrapper[4772]: I1128 11:37:32.994802 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:37:32 crc kubenswrapper[4772]: E1128 11:37:32.995258 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:37:46 crc kubenswrapper[4772]: I1128 11:37:46.384308 4772 scope.go:117] "RemoveContainer" containerID="9fc79052759f344a4d0f69e253d2282dd38880bee86bf1f1f7a06e96d3e72b59" Nov 28 11:37:46 crc kubenswrapper[4772]: I1128 11:37:46.457192 4772 scope.go:117] "RemoveContainer" containerID="24f05ed572a5fcc4353f42b0c4e40ca5c6e80d443b4544ff27d735fb05f312b5" Nov 28 11:37:46 crc kubenswrapper[4772]: I1128 11:37:46.519382 4772 scope.go:117] "RemoveContainer" containerID="bb6d3d966ff672c7337ae5a3f560b4a3bd0f73652b049cc3f8f575f5bf568d4a" Nov 28 11:37:47 crc kubenswrapper[4772]: I1128 11:37:47.995173 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:37:47 crc kubenswrapper[4772]: E1128 11:37:47.995675 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:38:02 crc kubenswrapper[4772]: I1128 11:38:02.995092 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:38:02 crc kubenswrapper[4772]: E1128 11:38:02.996066 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:38:14 crc kubenswrapper[4772]: I1128 11:38:14.048499 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bj2mg"] Nov 28 11:38:14 crc kubenswrapper[4772]: I1128 11:38:14.059851 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bj2mg"] Nov 28 11:38:16 crc kubenswrapper[4772]: I1128 11:38:16.006579 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b1f57a-1490-46e4-82db-913a03cf8750" path="/var/lib/kubelet/pods/97b1f57a-1490-46e4-82db-913a03cf8750/volumes" Nov 28 11:38:16 crc kubenswrapper[4772]: I1128 11:38:16.998203 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6nqwr"] Nov 28 11:38:17 crc kubenswrapper[4772]: I1128 11:38:17.002967 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:17 crc kubenswrapper[4772]: I1128 11:38:17.021725 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6nqwr"] Nov 28 11:38:17 crc kubenswrapper[4772]: I1128 11:38:17.162620 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-catalog-content\") pod \"community-operators-6nqwr\" (UID: \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\") " pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:17 crc kubenswrapper[4772]: I1128 11:38:17.163036 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgqj\" (UniqueName: \"kubernetes.io/projected/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-kube-api-access-rqgqj\") pod \"community-operators-6nqwr\" (UID: \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\") " pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:17 crc kubenswrapper[4772]: I1128 11:38:17.163144 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-utilities\") pod \"community-operators-6nqwr\" (UID: \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\") " pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:17 crc kubenswrapper[4772]: I1128 11:38:17.265053 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqgqj\" (UniqueName: \"kubernetes.io/projected/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-kube-api-access-rqgqj\") pod \"community-operators-6nqwr\" (UID: \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\") " pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:17 crc kubenswrapper[4772]: I1128 11:38:17.265116 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-utilities\") pod \"community-operators-6nqwr\" (UID: \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\") " pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:17 crc kubenswrapper[4772]: I1128 11:38:17.265211 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-catalog-content\") pod \"community-operators-6nqwr\" (UID: \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\") " pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:17 crc kubenswrapper[4772]: I1128 11:38:17.266016 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-catalog-content\") pod \"community-operators-6nqwr\" (UID: \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\") " pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:17 crc kubenswrapper[4772]: I1128 11:38:17.266159 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-utilities\") pod \"community-operators-6nqwr\" (UID: \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\") " pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:17 crc kubenswrapper[4772]: I1128 11:38:17.288861 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqgqj\" (UniqueName: \"kubernetes.io/projected/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-kube-api-access-rqgqj\") pod \"community-operators-6nqwr\" (UID: \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\") " pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:17 crc kubenswrapper[4772]: I1128 11:38:17.347817 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:17 crc kubenswrapper[4772]: I1128 11:38:17.925717 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6nqwr"] Nov 28 11:38:17 crc kubenswrapper[4772]: I1128 11:38:17.995035 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:38:17 crc kubenswrapper[4772]: E1128 11:38:17.995401 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:38:18 crc kubenswrapper[4772]: I1128 11:38:18.481947 4772 generic.go:334] "Generic (PLEG): container finished" podID="e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" containerID="4eb3add3ab0721e717714a1fc395274209b6bfc27d70a10572b9dc07da2f21e5" exitCode=0 Nov 28 11:38:18 crc kubenswrapper[4772]: I1128 11:38:18.481993 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nqwr" event={"ID":"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0","Type":"ContainerDied","Data":"4eb3add3ab0721e717714a1fc395274209b6bfc27d70a10572b9dc07da2f21e5"} Nov 28 11:38:18 crc kubenswrapper[4772]: I1128 11:38:18.482022 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nqwr" event={"ID":"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0","Type":"ContainerStarted","Data":"850fd20af3e5030ad364cbc52558cc4716fc78ffc78ca6897013d6d3cf3b30ba"} Nov 28 11:38:18 crc kubenswrapper[4772]: I1128 11:38:18.484976 4772 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Nov 28 11:38:19 crc kubenswrapper[4772]: I1128 11:38:19.506602 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nqwr" event={"ID":"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0","Type":"ContainerStarted","Data":"4f627ae50fda3a947248d065ae9e57be1f55775a6b344d7fc51486e62239775f"} Nov 28 11:38:20 crc kubenswrapper[4772]: I1128 11:38:20.516300 4772 generic.go:334] "Generic (PLEG): container finished" podID="e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" containerID="4f627ae50fda3a947248d065ae9e57be1f55775a6b344d7fc51486e62239775f" exitCode=0 Nov 28 11:38:20 crc kubenswrapper[4772]: I1128 11:38:20.516627 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nqwr" event={"ID":"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0","Type":"ContainerDied","Data":"4f627ae50fda3a947248d065ae9e57be1f55775a6b344d7fc51486e62239775f"} Nov 28 11:38:21 crc kubenswrapper[4772]: I1128 11:38:21.527145 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nqwr" event={"ID":"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0","Type":"ContainerStarted","Data":"d2f11f945d2f446c725f8a245a1cf2470380cb4f433cd5be5fc8d6e797bb47e1"} Nov 28 11:38:21 crc kubenswrapper[4772]: I1128 11:38:21.550630 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6nqwr" podStartSLOduration=3.061032595 podStartE2EDuration="5.550608222s" podCreationTimestamp="2025-11-28 11:38:16 +0000 UTC" firstStartedPulling="2025-11-28 11:38:18.484756995 +0000 UTC m=+1896.808000222" lastFinishedPulling="2025-11-28 11:38:20.974332622 +0000 UTC m=+1899.297575849" observedRunningTime="2025-11-28 11:38:21.546976243 +0000 UTC m=+1899.870219490" watchObservedRunningTime="2025-11-28 11:38:21.550608222 +0000 UTC m=+1899.873851449" Nov 28 11:38:26 crc kubenswrapper[4772]: I1128 11:38:26.594814 4772 generic.go:334] "Generic (PLEG): container finished" podID="b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe" containerID="65729d155ccab0e3a7b7731cdeffbea9ec71d1b2654525a80a6c534d7b91968a" exitCode=0 Nov 28 11:38:26 crc kubenswrapper[4772]: I1128 11:38:26.594907 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" event={"ID":"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe","Type":"ContainerDied","Data":"65729d155ccab0e3a7b7731cdeffbea9ec71d1b2654525a80a6c534d7b91968a"} Nov 28 11:38:27 crc kubenswrapper[4772]: I1128 11:38:27.349183 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:27 crc kubenswrapper[4772]: I1128 11:38:27.350016 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:27 crc kubenswrapper[4772]: I1128 11:38:27.431676 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:27 crc kubenswrapper[4772]: I1128 11:38:27.679112 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:27 crc kubenswrapper[4772]: I1128 11:38:27.752396 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6nqwr"] Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.108428 4772 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.295446 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v7bt\" (UniqueName: \"kubernetes.io/projected/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-kube-api-access-4v7bt\") pod \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\" (UID: \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\") " Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.295608 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-inventory\") pod \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\" (UID: \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\") " Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.295663 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-ssh-key\") pod \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\" (UID: \"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe\") " Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.304735 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-kube-api-access-4v7bt" (OuterVolumeSpecName: "kube-api-access-4v7bt") pod "b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe" (UID: "b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe"). InnerVolumeSpecName "kube-api-access-4v7bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.353851 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-inventory" (OuterVolumeSpecName: "inventory") pod "b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe" (UID: "b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.361641 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe" (UID: "b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.399545 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v7bt\" (UniqueName: \"kubernetes.io/projected/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-kube-api-access-4v7bt\") on node \"crc\" DevicePath \"\"" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.399611 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.399636 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.616453 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.616470 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j" event={"ID":"b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe","Type":"ContainerDied","Data":"af2f4e416f84e6095be704b6601de8daf57374515fcc236a022b3ca8ea289bb3"} Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.616549 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af2f4e416f84e6095be704b6601de8daf57374515fcc236a022b3ca8ea289bb3" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.725309 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wwp4z"] Nov 28 11:38:28 crc kubenswrapper[4772]: E1128 11:38:28.726055 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.726091 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.726429 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.727548 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.731166 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.731211 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.731784 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.732506 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.755734 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wwp4z"] Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.808504 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8e844804-6f77-4b6c-93c7-cdc083c5673d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wwp4z\" (UID: \"8e844804-6f77-4b6c-93c7-cdc083c5673d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.808568 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k729n\" (UniqueName: \"kubernetes.io/projected/8e844804-6f77-4b6c-93c7-cdc083c5673d-kube-api-access-k729n\") pod \"ssh-known-hosts-edpm-deployment-wwp4z\" (UID: \"8e844804-6f77-4b6c-93c7-cdc083c5673d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.808726 4772 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e844804-6f77-4b6c-93c7-cdc083c5673d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wwp4z\" (UID: \"8e844804-6f77-4b6c-93c7-cdc083c5673d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.910808 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8e844804-6f77-4b6c-93c7-cdc083c5673d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wwp4z\" (UID: \"8e844804-6f77-4b6c-93c7-cdc083c5673d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.910862 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k729n\" (UniqueName: \"kubernetes.io/projected/8e844804-6f77-4b6c-93c7-cdc083c5673d-kube-api-access-k729n\") pod \"ssh-known-hosts-edpm-deployment-wwp4z\" (UID: \"8e844804-6f77-4b6c-93c7-cdc083c5673d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.910914 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e844804-6f77-4b6c-93c7-cdc083c5673d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wwp4z\" (UID: \"8e844804-6f77-4b6c-93c7-cdc083c5673d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.921552 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e844804-6f77-4b6c-93c7-cdc083c5673d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wwp4z\" (UID: \"8e844804-6f77-4b6c-93c7-cdc083c5673d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.921660 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8e844804-6f77-4b6c-93c7-cdc083c5673d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wwp4z\" (UID: \"8e844804-6f77-4b6c-93c7-cdc083c5673d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" Nov 28 11:38:28 crc kubenswrapper[4772]: I1128 11:38:28.934771 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k729n\" (UniqueName: \"kubernetes.io/projected/8e844804-6f77-4b6c-93c7-cdc083c5673d-kube-api-access-k729n\") pod \"ssh-known-hosts-edpm-deployment-wwp4z\" (UID: \"8e844804-6f77-4b6c-93c7-cdc083c5673d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" Nov 28 11:38:29 crc kubenswrapper[4772]: I1128 11:38:29.052731 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" Nov 28 11:38:29 crc kubenswrapper[4772]: I1128 11:38:29.627273 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6nqwr" podUID="e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" containerName="registry-server" containerID="cri-o://d2f11f945d2f446c725f8a245a1cf2470380cb4f433cd5be5fc8d6e797bb47e1" gracePeriod=2 Nov 28 11:38:29 crc kubenswrapper[4772]: I1128 11:38:29.650066 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wwp4z"] Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.167906 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.239300 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqgqj\" (UniqueName: \"kubernetes.io/projected/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-kube-api-access-rqgqj\") pod \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\" (UID: \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\") " Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.240109 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-catalog-content\") pod \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\" (UID: \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\") " Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.240226 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-utilities\") pod \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\" (UID: \"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0\") " Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.240973 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-utilities" (OuterVolumeSpecName: "utilities") pod "e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" (UID: "e88c4a6c-f7c9-4bc8-bfef-7c784e416db0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.247570 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-kube-api-access-rqgqj" (OuterVolumeSpecName: "kube-api-access-rqgqj") pod "e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" (UID: "e88c4a6c-f7c9-4bc8-bfef-7c784e416db0"). InnerVolumeSpecName "kube-api-access-rqgqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.341253 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.341301 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqgqj\" (UniqueName: \"kubernetes.io/projected/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-kube-api-access-rqgqj\") on node \"crc\" DevicePath \"\"" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.645030 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" (UID: "e88c4a6c-f7c9-4bc8-bfef-7c784e416db0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.646571 4772 generic.go:334] "Generic (PLEG): container finished" podID="e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" containerID="d2f11f945d2f446c725f8a245a1cf2470380cb4f433cd5be5fc8d6e797bb47e1" exitCode=0 Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.646698 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6nqwr" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.646665 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nqwr" event={"ID":"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0","Type":"ContainerDied","Data":"d2f11f945d2f446c725f8a245a1cf2470380cb4f433cd5be5fc8d6e797bb47e1"} Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.646863 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nqwr" event={"ID":"e88c4a6c-f7c9-4bc8-bfef-7c784e416db0","Type":"ContainerDied","Data":"850fd20af3e5030ad364cbc52558cc4716fc78ffc78ca6897013d6d3cf3b30ba"} Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.646910 4772 scope.go:117] "RemoveContainer" containerID="d2f11f945d2f446c725f8a245a1cf2470380cb4f433cd5be5fc8d6e797bb47e1" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.651146 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.661517 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" event={"ID":"8e844804-6f77-4b6c-93c7-cdc083c5673d","Type":"ContainerStarted","Data":"67ea33485984ef8b56f715765e690d8c2bcba3b04ae40cbb5556b4ee869f3a43"} Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.702446 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6nqwr"] Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.711435 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6nqwr"] Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.733232 4772 scope.go:117] "RemoveContainer" containerID="4f627ae50fda3a947248d065ae9e57be1f55775a6b344d7fc51486e62239775f" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.774726 4772 scope.go:117] "RemoveContainer" 
containerID="4eb3add3ab0721e717714a1fc395274209b6bfc27d70a10572b9dc07da2f21e5" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.804458 4772 scope.go:117] "RemoveContainer" containerID="d2f11f945d2f446c725f8a245a1cf2470380cb4f433cd5be5fc8d6e797bb47e1" Nov 28 11:38:30 crc kubenswrapper[4772]: E1128 11:38:30.804867 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2f11f945d2f446c725f8a245a1cf2470380cb4f433cd5be5fc8d6e797bb47e1\": container with ID starting with d2f11f945d2f446c725f8a245a1cf2470380cb4f433cd5be5fc8d6e797bb47e1 not found: ID does not exist" containerID="d2f11f945d2f446c725f8a245a1cf2470380cb4f433cd5be5fc8d6e797bb47e1" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.804914 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2f11f945d2f446c725f8a245a1cf2470380cb4f433cd5be5fc8d6e797bb47e1"} err="failed to get container status \"d2f11f945d2f446c725f8a245a1cf2470380cb4f433cd5be5fc8d6e797bb47e1\": rpc error: code = NotFound desc = could not find container \"d2f11f945d2f446c725f8a245a1cf2470380cb4f433cd5be5fc8d6e797bb47e1\": container with ID starting with d2f11f945d2f446c725f8a245a1cf2470380cb4f433cd5be5fc8d6e797bb47e1 not found: ID does not exist" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.804944 4772 scope.go:117] "RemoveContainer" containerID="4f627ae50fda3a947248d065ae9e57be1f55775a6b344d7fc51486e62239775f" Nov 28 11:38:30 crc kubenswrapper[4772]: E1128 11:38:30.805513 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f627ae50fda3a947248d065ae9e57be1f55775a6b344d7fc51486e62239775f\": container with ID starting with 4f627ae50fda3a947248d065ae9e57be1f55775a6b344d7fc51486e62239775f not found: ID does not exist" containerID="4f627ae50fda3a947248d065ae9e57be1f55775a6b344d7fc51486e62239775f" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.805537 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f627ae50fda3a947248d065ae9e57be1f55775a6b344d7fc51486e62239775f"} err="failed to get container status \"4f627ae50fda3a947248d065ae9e57be1f55775a6b344d7fc51486e62239775f\": rpc error: code = NotFound desc = could not find container \"4f627ae50fda3a947248d065ae9e57be1f55775a6b344d7fc51486e62239775f\": container with ID starting with 4f627ae50fda3a947248d065ae9e57be1f55775a6b344d7fc51486e62239775f not found: ID does not exist" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.805552 4772 scope.go:117] "RemoveContainer" containerID="4eb3add3ab0721e717714a1fc395274209b6bfc27d70a10572b9dc07da2f21e5" Nov 28 11:38:30 crc kubenswrapper[4772]: E1128 11:38:30.805901 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eb3add3ab0721e717714a1fc395274209b6bfc27d70a10572b9dc07da2f21e5\": container with ID starting with 4eb3add3ab0721e717714a1fc395274209b6bfc27d70a10572b9dc07da2f21e5 not found: ID does not exist" containerID="4eb3add3ab0721e717714a1fc395274209b6bfc27d70a10572b9dc07da2f21e5" Nov 28 11:38:30 crc kubenswrapper[4772]: I1128 11:38:30.805934 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eb3add3ab0721e717714a1fc395274209b6bfc27d70a10572b9dc07da2f21e5"} err="failed to get container status \"4eb3add3ab0721e717714a1fc395274209b6bfc27d70a10572b9dc07da2f21e5\": rpc error: code = 
NotFound desc = could not find container \"4eb3add3ab0721e717714a1fc395274209b6bfc27d70a10572b9dc07da2f21e5\": container with ID starting with 4eb3add3ab0721e717714a1fc395274209b6bfc27d70a10572b9dc07da2f21e5 not found: ID does not exist" Nov 28 11:38:31 crc kubenswrapper[4772]: I1128 11:38:31.676014 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" event={"ID":"8e844804-6f77-4b6c-93c7-cdc083c5673d","Type":"ContainerStarted","Data":"6983b4f44e2f9dab24f6792173f9cc707998ea465264ba14361f23a721abba34"} Nov 28 11:38:31 crc kubenswrapper[4772]: I1128 11:38:31.706834 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" podStartSLOduration=2.850396858 podStartE2EDuration="3.706814953s" podCreationTimestamp="2025-11-28 11:38:28 +0000 UTC" firstStartedPulling="2025-11-28 11:38:29.653828489 +0000 UTC m=+1907.977071716" lastFinishedPulling="2025-11-28 11:38:30.510246584 +0000 UTC m=+1908.833489811" observedRunningTime="2025-11-28 11:38:31.704517521 +0000 UTC m=+1910.027760778" watchObservedRunningTime="2025-11-28 11:38:31.706814953 +0000 UTC m=+1910.030058180" Nov 28 11:38:32 crc kubenswrapper[4772]: I1128 11:38:32.003115 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:38:32 crc kubenswrapper[4772]: E1128 11:38:32.003349 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:38:32 crc kubenswrapper[4772]: I1128 11:38:32.011526 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" path="/var/lib/kubelet/pods/e88c4a6c-f7c9-4bc8-bfef-7c784e416db0/volumes" Nov 28 11:38:38 crc kubenswrapper[4772]: I1128 11:38:38.748514 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e844804-6f77-4b6c-93c7-cdc083c5673d" containerID="6983b4f44e2f9dab24f6792173f9cc707998ea465264ba14361f23a721abba34" exitCode=0 Nov 28 11:38:38 crc kubenswrapper[4772]: I1128 11:38:38.748684 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" event={"ID":"8e844804-6f77-4b6c-93c7-cdc083c5673d","Type":"ContainerDied","Data":"6983b4f44e2f9dab24f6792173f9cc707998ea465264ba14361f23a721abba34"} Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.259113 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.368510 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e844804-6f77-4b6c-93c7-cdc083c5673d-ssh-key-openstack-edpm-ipam\") pod \"8e844804-6f77-4b6c-93c7-cdc083c5673d\" (UID: \"8e844804-6f77-4b6c-93c7-cdc083c5673d\") " Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.368661 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k729n\" (UniqueName: \"kubernetes.io/projected/8e844804-6f77-4b6c-93c7-cdc083c5673d-kube-api-access-k729n\") pod \"8e844804-6f77-4b6c-93c7-cdc083c5673d\" (UID: \"8e844804-6f77-4b6c-93c7-cdc083c5673d\") " Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.368985 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8e844804-6f77-4b6c-93c7-cdc083c5673d-inventory-0\") pod \"8e844804-6f77-4b6c-93c7-cdc083c5673d\" (UID: \"8e844804-6f77-4b6c-93c7-cdc083c5673d\") " Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.380744 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e844804-6f77-4b6c-93c7-cdc083c5673d-kube-api-access-k729n" (OuterVolumeSpecName: "kube-api-access-k729n") pod "8e844804-6f77-4b6c-93c7-cdc083c5673d" (UID: "8e844804-6f77-4b6c-93c7-cdc083c5673d"). InnerVolumeSpecName "kube-api-access-k729n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.396198 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e844804-6f77-4b6c-93c7-cdc083c5673d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "8e844804-6f77-4b6c-93c7-cdc083c5673d" (UID: "8e844804-6f77-4b6c-93c7-cdc083c5673d"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.398274 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e844804-6f77-4b6c-93c7-cdc083c5673d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8e844804-6f77-4b6c-93c7-cdc083c5673d" (UID: "8e844804-6f77-4b6c-93c7-cdc083c5673d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.472228 4772 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8e844804-6f77-4b6c-93c7-cdc083c5673d-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.472266 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e844804-6f77-4b6c-93c7-cdc083c5673d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.472279 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k729n\" (UniqueName: \"kubernetes.io/projected/8e844804-6f77-4b6c-93c7-cdc083c5673d-kube-api-access-k729n\") on node \"crc\" DevicePath \"\"" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.772149 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" event={"ID":"8e844804-6f77-4b6c-93c7-cdc083c5673d","Type":"ContainerDied","Data":"67ea33485984ef8b56f715765e690d8c2bcba3b04ae40cbb5556b4ee869f3a43"} Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.772199 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67ea33485984ef8b56f715765e690d8c2bcba3b04ae40cbb5556b4ee869f3a43" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.772223 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wwp4z" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.884223 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm"] Nov 28 11:38:40 crc kubenswrapper[4772]: E1128 11:38:40.884822 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e844804-6f77-4b6c-93c7-cdc083c5673d" containerName="ssh-known-hosts-edpm-deployment" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.884850 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e844804-6f77-4b6c-93c7-cdc083c5673d" containerName="ssh-known-hosts-edpm-deployment" Nov 28 11:38:40 crc kubenswrapper[4772]: E1128 11:38:40.884889 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" containerName="extract-content" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.884898 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" containerName="extract-content" Nov 28 11:38:40 crc kubenswrapper[4772]: E1128 11:38:40.884909 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" containerName="registry-server" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.884917 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" containerName="registry-server" Nov 28 11:38:40 crc kubenswrapper[4772]: E1128 11:38:40.884927 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" containerName="extract-utilities" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.884935 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" containerName="extract-utilities" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.885169 4772 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8e844804-6f77-4b6c-93c7-cdc083c5673d" containerName="ssh-known-hosts-edpm-deployment" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.885183 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e88c4a6c-f7c9-4bc8-bfef-7c784e416db0" containerName="registry-server" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.886174 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.894152 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm"] Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.896580 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.897237 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.897637 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.897927 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.982070 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mxktm\" (UID: \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.982163 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m79z\" (UniqueName: \"kubernetes.io/projected/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-kube-api-access-9m79z\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mxktm\" (UID: \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" Nov 28 11:38:40 crc kubenswrapper[4772]: I1128 11:38:40.982190 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mxktm\" (UID: \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" Nov 28 11:38:41 crc kubenswrapper[4772]: I1128 11:38:41.083848 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m79z\" (UniqueName: \"kubernetes.io/projected/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-kube-api-access-9m79z\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mxktm\" (UID: \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" Nov 28 11:38:41 crc kubenswrapper[4772]: I1128 11:38:41.084619 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mxktm\" (UID: \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" Nov 28 11:38:41 crc kubenswrapper[4772]: I1128 11:38:41.084818 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mxktm\" (UID: \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" Nov 28 11:38:41 crc kubenswrapper[4772]: I1128 11:38:41.091179 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mxktm\" (UID: \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" Nov 28 11:38:41 crc kubenswrapper[4772]: I1128 11:38:41.104424 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mxktm\" (UID: \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" Nov 28 11:38:41 crc kubenswrapper[4772]: I1128 11:38:41.114423 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m79z\" (UniqueName: \"kubernetes.io/projected/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-kube-api-access-9m79z\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mxktm\" (UID: \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" Nov 28 11:38:41 crc kubenswrapper[4772]: I1128 11:38:41.219337 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" Nov 28 11:38:41 crc kubenswrapper[4772]: I1128 11:38:41.901248 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm"] Nov 28 11:38:42 crc kubenswrapper[4772]: I1128 11:38:42.327501 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:38:42 crc kubenswrapper[4772]: I1128 11:38:42.802184 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" event={"ID":"a262a424-caf0-4d6e-95e6-c0ca5ff2473b","Type":"ContainerStarted","Data":"449137a07bde89a29d343c9ca2e933edaa32cc9d148b0eb7e8b27fab1a35aae1"} Nov 28 11:38:42 crc kubenswrapper[4772]: I1128 11:38:42.802624 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" event={"ID":"a262a424-caf0-4d6e-95e6-c0ca5ff2473b","Type":"ContainerStarted","Data":"6d7c7f7cb756ba6fb113d866ffab7aedcba1ef2999470606bdb118f7c446b9bf"} Nov 28 11:38:42 crc kubenswrapper[4772]: I1128 11:38:42.838611 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" podStartSLOduration=2.410701653 podStartE2EDuration="2.838590931s" podCreationTimestamp="2025-11-28 11:38:40 +0000 UTC" firstStartedPulling="2025-11-28 11:38:41.896349717 +0000 UTC m=+1920.219592944" lastFinishedPulling="2025-11-28 11:38:42.324238965 +0000 UTC m=+1920.647482222" observedRunningTime="2025-11-28 11:38:42.832082485 +0000 UTC m=+1921.155325752" watchObservedRunningTime="2025-11-28 11:38:42.838590931 +0000 UTC m=+1921.161834158" Nov 28 11:38:45 crc kubenswrapper[4772]: I1128 11:38:45.994769 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:38:45 crc kubenswrapper[4772]: E1128 11:38:45.995689 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:38:46 crc kubenswrapper[4772]: I1128 11:38:46.652196 4772 scope.go:117] "RemoveContainer" containerID="40433f1908faf255c96bc8ebf48da837755e6ec15f2cb47eeb4573cb1ad41e51" Nov 28 11:38:51 crc kubenswrapper[4772]: E1128 11:38:51.582533 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda262a424_caf0_4d6e_95e6_c0ca5ff2473b.slice/crio-conmon-449137a07bde89a29d343c9ca2e933edaa32cc9d148b0eb7e8b27fab1a35aae1.scope\": RecentStats: unable to find data in memory cache]" Nov 28 11:38:51 crc kubenswrapper[4772]: I1128 11:38:51.891644 4772 generic.go:334] "Generic (PLEG): container finished" podID="a262a424-caf0-4d6e-95e6-c0ca5ff2473b" containerID="449137a07bde89a29d343c9ca2e933edaa32cc9d148b0eb7e8b27fab1a35aae1" exitCode=0 Nov 28 11:38:51 crc kubenswrapper[4772]: I1128 11:38:51.891814 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" 
event={"ID":"a262a424-caf0-4d6e-95e6-c0ca5ff2473b","Type":"ContainerDied","Data":"449137a07bde89a29d343c9ca2e933edaa32cc9d148b0eb7e8b27fab1a35aae1"} Nov 28 11:38:53 crc kubenswrapper[4772]: I1128 11:38:53.405810 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" Nov 28 11:38:53 crc kubenswrapper[4772]: I1128 11:38:53.562084 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-inventory\") pod \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\" (UID: \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\") " Nov 28 11:38:53 crc kubenswrapper[4772]: I1128 11:38:53.562159 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-ssh-key\") pod \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\" (UID: \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\") " Nov 28 11:38:53 crc kubenswrapper[4772]: I1128 11:38:53.562386 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m79z\" (UniqueName: \"kubernetes.io/projected/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-kube-api-access-9m79z\") pod \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\" (UID: \"a262a424-caf0-4d6e-95e6-c0ca5ff2473b\") " Nov 28 11:38:53 crc kubenswrapper[4772]: I1128 11:38:53.574701 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-kube-api-access-9m79z" (OuterVolumeSpecName: "kube-api-access-9m79z") pod "a262a424-caf0-4d6e-95e6-c0ca5ff2473b" (UID: "a262a424-caf0-4d6e-95e6-c0ca5ff2473b"). InnerVolumeSpecName "kube-api-access-9m79z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:38:53 crc kubenswrapper[4772]: I1128 11:38:53.618549 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-inventory" (OuterVolumeSpecName: "inventory") pod "a262a424-caf0-4d6e-95e6-c0ca5ff2473b" (UID: "a262a424-caf0-4d6e-95e6-c0ca5ff2473b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:38:53 crc kubenswrapper[4772]: I1128 11:38:53.625560 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a262a424-caf0-4d6e-95e6-c0ca5ff2473b" (UID: "a262a424-caf0-4d6e-95e6-c0ca5ff2473b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:38:53 crc kubenswrapper[4772]: I1128 11:38:53.664997 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:38:53 crc kubenswrapper[4772]: I1128 11:38:53.665040 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:38:53 crc kubenswrapper[4772]: I1128 11:38:53.665058 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m79z\" (UniqueName: \"kubernetes.io/projected/a262a424-caf0-4d6e-95e6-c0ca5ff2473b-kube-api-access-9m79z\") on node \"crc\" DevicePath \"\"" Nov 28 11:38:53 crc kubenswrapper[4772]: I1128 11:38:53.914921 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" event={"ID":"a262a424-caf0-4d6e-95e6-c0ca5ff2473b","Type":"ContainerDied","Data":"6d7c7f7cb756ba6fb113d866ffab7aedcba1ef2999470606bdb118f7c446b9bf"} Nov 28 11:38:53 crc kubenswrapper[4772]: I1128 11:38:53.914969 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d7c7f7cb756ba6fb113d866ffab7aedcba1ef2999470606bdb118f7c446b9bf" Nov 28 11:38:53 crc kubenswrapper[4772]: I1128 11:38:53.915028 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mxktm" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.206151 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp"] Nov 28 11:38:54 crc kubenswrapper[4772]: E1128 11:38:54.207033 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a262a424-caf0-4d6e-95e6-c0ca5ff2473b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.207193 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a262a424-caf0-4d6e-95e6-c0ca5ff2473b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.207682 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a262a424-caf0-4d6e-95e6-c0ca5ff2473b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.208829 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.212776 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.214077 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.214288 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.214423 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.219208 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp"] Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.277752 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/02da511d-7da9-49de-bed3-34ecbf58b864-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp\" (UID: \"02da511d-7da9-49de-bed3-34ecbf58b864\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.277942 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02da511d-7da9-49de-bed3-34ecbf58b864-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp\" (UID: \"02da511d-7da9-49de-bed3-34ecbf58b864\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.278038 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cht2f\" (UniqueName: \"kubernetes.io/projected/02da511d-7da9-49de-bed3-34ecbf58b864-kube-api-access-cht2f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp\" (UID: \"02da511d-7da9-49de-bed3-34ecbf58b864\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.380051 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02da511d-7da9-49de-bed3-34ecbf58b864-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp\" (UID: \"02da511d-7da9-49de-bed3-34ecbf58b864\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.380607 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cht2f\" (UniqueName: \"kubernetes.io/projected/02da511d-7da9-49de-bed3-34ecbf58b864-kube-api-access-cht2f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp\" (UID: \"02da511d-7da9-49de-bed3-34ecbf58b864\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.380719 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/02da511d-7da9-49de-bed3-34ecbf58b864-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp\" (UID: 
\"02da511d-7da9-49de-bed3-34ecbf58b864\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.386037 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02da511d-7da9-49de-bed3-34ecbf58b864-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp\" (UID: \"02da511d-7da9-49de-bed3-34ecbf58b864\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.387479 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/02da511d-7da9-49de-bed3-34ecbf58b864-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp\" (UID: \"02da511d-7da9-49de-bed3-34ecbf58b864\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.414946 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cht2f\" (UniqueName: \"kubernetes.io/projected/02da511d-7da9-49de-bed3-34ecbf58b864-kube-api-access-cht2f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp\" (UID: \"02da511d-7da9-49de-bed3-34ecbf58b864\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" Nov 28 11:38:54 crc kubenswrapper[4772]: I1128 11:38:54.544678 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" Nov 28 11:38:55 crc kubenswrapper[4772]: I1128 11:38:55.227930 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp"] Nov 28 11:38:55 crc kubenswrapper[4772]: W1128 11:38:55.234691 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02da511d_7da9_49de_bed3_34ecbf58b864.slice/crio-0f11eeb667e50a948935b61a16e583f1a9aa54c911acb1a9a5d731f2a17be4b6 WatchSource:0}: Error finding container 0f11eeb667e50a948935b61a16e583f1a9aa54c911acb1a9a5d731f2a17be4b6: Status 404 returned error can't find the container with id 0f11eeb667e50a948935b61a16e583f1a9aa54c911acb1a9a5d731f2a17be4b6 Nov 28 11:38:55 crc kubenswrapper[4772]: I1128 11:38:55.934503 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" event={"ID":"02da511d-7da9-49de-bed3-34ecbf58b864","Type":"ContainerStarted","Data":"0f11eeb667e50a948935b61a16e583f1a9aa54c911acb1a9a5d731f2a17be4b6"} Nov 28 11:38:56 crc kubenswrapper[4772]: I1128 11:38:56.949891 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" event={"ID":"02da511d-7da9-49de-bed3-34ecbf58b864","Type":"ContainerStarted","Data":"1f2758e2c124091b4290ababe61b318d8a3dcb6a5f6677415ca10fa6a5eaf5e7"} Nov 28 11:38:56 crc kubenswrapper[4772]: I1128 11:38:56.977183 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" podStartSLOduration=2.5247997890000002 podStartE2EDuration="2.977156609s" podCreationTimestamp="2025-11-28 11:38:54 +0000 UTC" firstStartedPulling="2025-11-28 11:38:55.237481576 +0000 UTC m=+1933.560724823" lastFinishedPulling="2025-11-28 11:38:55.689838416 +0000 UTC m=+1934.013081643" observedRunningTime="2025-11-28 11:38:56.97495283 +0000 UTC 
m=+1935.298196087" watchObservedRunningTime="2025-11-28 11:38:56.977156609 +0000 UTC m=+1935.300399866" Nov 28 11:38:56 crc kubenswrapper[4772]: I1128 11:38:56.994749 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:38:56 crc kubenswrapper[4772]: E1128 11:38:56.995411 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:39:06 crc kubenswrapper[4772]: I1128 11:39:06.075454 4772 generic.go:334] "Generic (PLEG): container finished" podID="02da511d-7da9-49de-bed3-34ecbf58b864" containerID="1f2758e2c124091b4290ababe61b318d8a3dcb6a5f6677415ca10fa6a5eaf5e7" exitCode=0 Nov 28 11:39:06 crc kubenswrapper[4772]: I1128 11:39:06.075602 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" event={"ID":"02da511d-7da9-49de-bed3-34ecbf58b864","Type":"ContainerDied","Data":"1f2758e2c124091b4290ababe61b318d8a3dcb6a5f6677415ca10fa6a5eaf5e7"} Nov 28 11:39:07 crc kubenswrapper[4772]: I1128 11:39:07.616756 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" Nov 28 11:39:07 crc kubenswrapper[4772]: I1128 11:39:07.702606 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cht2f\" (UniqueName: \"kubernetes.io/projected/02da511d-7da9-49de-bed3-34ecbf58b864-kube-api-access-cht2f\") pod \"02da511d-7da9-49de-bed3-34ecbf58b864\" (UID: \"02da511d-7da9-49de-bed3-34ecbf58b864\") " Nov 28 11:39:07 crc kubenswrapper[4772]: I1128 11:39:07.702719 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02da511d-7da9-49de-bed3-34ecbf58b864-inventory\") pod \"02da511d-7da9-49de-bed3-34ecbf58b864\" (UID: \"02da511d-7da9-49de-bed3-34ecbf58b864\") " Nov 28 11:39:07 crc kubenswrapper[4772]: I1128 11:39:07.702878 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/02da511d-7da9-49de-bed3-34ecbf58b864-ssh-key\") pod \"02da511d-7da9-49de-bed3-34ecbf58b864\" (UID: \"02da511d-7da9-49de-bed3-34ecbf58b864\") " Nov 28 11:39:07 crc kubenswrapper[4772]: I1128 11:39:07.709052 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02da511d-7da9-49de-bed3-34ecbf58b864-kube-api-access-cht2f" (OuterVolumeSpecName: "kube-api-access-cht2f") pod "02da511d-7da9-49de-bed3-34ecbf58b864" (UID: "02da511d-7da9-49de-bed3-34ecbf58b864"). InnerVolumeSpecName "kube-api-access-cht2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:39:07 crc kubenswrapper[4772]: I1128 11:39:07.743971 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02da511d-7da9-49de-bed3-34ecbf58b864-inventory" (OuterVolumeSpecName: "inventory") pod "02da511d-7da9-49de-bed3-34ecbf58b864" (UID: "02da511d-7da9-49de-bed3-34ecbf58b864"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:39:07 crc kubenswrapper[4772]: I1128 11:39:07.755433 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02da511d-7da9-49de-bed3-34ecbf58b864-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "02da511d-7da9-49de-bed3-34ecbf58b864" (UID: "02da511d-7da9-49de-bed3-34ecbf58b864"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:39:07 crc kubenswrapper[4772]: I1128 11:39:07.805933 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cht2f\" (UniqueName: \"kubernetes.io/projected/02da511d-7da9-49de-bed3-34ecbf58b864-kube-api-access-cht2f\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:07 crc kubenswrapper[4772]: I1128 11:39:07.806277 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02da511d-7da9-49de-bed3-34ecbf58b864-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:07 crc kubenswrapper[4772]: I1128 11:39:07.806401 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/02da511d-7da9-49de-bed3-34ecbf58b864-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.104972 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" event={"ID":"02da511d-7da9-49de-bed3-34ecbf58b864","Type":"ContainerDied","Data":"0f11eeb667e50a948935b61a16e583f1a9aa54c911acb1a9a5d731f2a17be4b6"} Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.105040 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f11eeb667e50a948935b61a16e583f1a9aa54c911acb1a9a5d731f2a17be4b6" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.105051 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.311499 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn"] Nov 28 11:39:08 crc kubenswrapper[4772]: E1128 11:39:08.312527 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02da511d-7da9-49de-bed3-34ecbf58b864" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.312620 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="02da511d-7da9-49de-bed3-34ecbf58b864" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.312927 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="02da511d-7da9-49de-bed3-34ecbf58b864" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.313765 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.316024 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.317550 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.317616 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.317549 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.317915 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.318001 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.321707 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.321784 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.337889 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn"] Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.419220 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.419392 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.419459 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.419552 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: 
\"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.419601 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.419644 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.419715 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.419805 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.419862 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.419893 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4qjg\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-kube-api-access-h4qjg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.419928 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.419957 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.420006 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.420069 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.521751 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.521791 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4qjg\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-kube-api-access-h4qjg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.521822 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.521845 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.521867 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.521895 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.521924 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.521961 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.522002 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.522042 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.522064 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.522084 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.522115 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.522151 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.529865 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.530044 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.530142 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.530495 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.530650 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.531006 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.531662 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.532110 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.532183 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.533568 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.533968 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.535026 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.537017 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.538927 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4qjg\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-kube-api-access-h4qjg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-94trn\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:08 crc kubenswrapper[4772]: I1128 11:39:08.639744 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:09 crc kubenswrapper[4772]: I1128 11:39:09.097582 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn"] Nov 28 11:39:09 crc kubenswrapper[4772]: I1128 11:39:09.120553 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" event={"ID":"99b393cd-c63f-4007-9cb8-26a6e5710794","Type":"ContainerStarted","Data":"ea0e5c79b6a67380561e289a575c4fd4961efbb97e2d901de45036bda44a7cf1"} Nov 28 11:39:10 crc kubenswrapper[4772]: I1128 11:39:10.132684 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" event={"ID":"99b393cd-c63f-4007-9cb8-26a6e5710794","Type":"ContainerStarted","Data":"2b80bd42f6d3000894c8f865bdd74c2ff6e830ce0178ec9af744fbcd22187a04"} Nov 28 11:39:10 crc kubenswrapper[4772]: I1128 11:39:10.163281 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" podStartSLOduration=1.501068393 podStartE2EDuration="2.163241996s" podCreationTimestamp="2025-11-28 11:39:08 +0000 UTC" firstStartedPulling="2025-11-28 11:39:09.100642157 +0000 UTC m=+1947.423885394" lastFinishedPulling="2025-11-28 11:39:09.76281577 +0000 UTC m=+1948.086058997" observedRunningTime="2025-11-28 11:39:10.156419641 +0000 UTC m=+1948.479662868" watchObservedRunningTime="2025-11-28 11:39:10.163241996 +0000 UTC m=+1948.486485223" Nov 28 11:39:10 crc kubenswrapper[4772]: I1128 11:39:10.994744 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:39:10 crc kubenswrapper[4772]: E1128 11:39:10.995663 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:39:22 crc kubenswrapper[4772]: I1128 11:39:22.006614 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:39:22 crc kubenswrapper[4772]: E1128 11:39:22.007343 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:39:35 crc kubenswrapper[4772]: I1128 11:39:35.994931 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:39:35 crc kubenswrapper[4772]: E1128 11:39:35.996564 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:39:50 crc kubenswrapper[4772]: I1128 11:39:50.995008 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:39:50 crc kubenswrapper[4772]: E1128 11:39:50.996057 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:39:53 crc kubenswrapper[4772]: I1128 11:39:53.623729 4772 generic.go:334] "Generic (PLEG): container finished" podID="99b393cd-c63f-4007-9cb8-26a6e5710794" containerID="2b80bd42f6d3000894c8f865bdd74c2ff6e830ce0178ec9af744fbcd22187a04" exitCode=0 Nov 28 11:39:53 crc kubenswrapper[4772]: I1128 11:39:53.623823 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" event={"ID":"99b393cd-c63f-4007-9cb8-26a6e5710794","Type":"ContainerDied","Data":"2b80bd42f6d3000894c8f865bdd74c2ff6e830ce0178ec9af744fbcd22187a04"} Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.155937 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.248392 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-ovn-default-certs-0\") pod \"99b393cd-c63f-4007-9cb8-26a6e5710794\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.248525 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-ovn-combined-ca-bundle\") pod \"99b393cd-c63f-4007-9cb8-26a6e5710794\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.248621 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-nova-combined-ca-bundle\") pod \"99b393cd-c63f-4007-9cb8-26a6e5710794\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.248738 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"99b393cd-c63f-4007-9cb8-26a6e5710794\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.248826 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-telemetry-combined-ca-bundle\") pod \"99b393cd-c63f-4007-9cb8-26a6e5710794\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.248867 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-bootstrap-combined-ca-bundle\") pod \"99b393cd-c63f-4007-9cb8-26a6e5710794\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.248909 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-neutron-metadata-combined-ca-bundle\") pod \"99b393cd-c63f-4007-9cb8-26a6e5710794\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.248994 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-repo-setup-combined-ca-bundle\") pod \"99b393cd-c63f-4007-9cb8-26a6e5710794\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.249040 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-ssh-key\") pod \"99b393cd-c63f-4007-9cb8-26a6e5710794\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " Nov 28 11:39:55 crc 
kubenswrapper[4772]: I1128 11:39:55.249141 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-libvirt-combined-ca-bundle\") pod \"99b393cd-c63f-4007-9cb8-26a6e5710794\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.249301 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4qjg\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-kube-api-access-h4qjg\") pod \"99b393cd-c63f-4007-9cb8-26a6e5710794\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.249431 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"99b393cd-c63f-4007-9cb8-26a6e5710794\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.249491 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"99b393cd-c63f-4007-9cb8-26a6e5710794\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.250965 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-inventory\") pod \"99b393cd-c63f-4007-9cb8-26a6e5710794\" (UID: \"99b393cd-c63f-4007-9cb8-26a6e5710794\") " Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.257835 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "99b393cd-c63f-4007-9cb8-26a6e5710794" (UID: "99b393cd-c63f-4007-9cb8-26a6e5710794"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.259511 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "99b393cd-c63f-4007-9cb8-26a6e5710794" (UID: "99b393cd-c63f-4007-9cb8-26a6e5710794"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.260868 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "99b393cd-c63f-4007-9cb8-26a6e5710794" (UID: "99b393cd-c63f-4007-9cb8-26a6e5710794"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.261014 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "99b393cd-c63f-4007-9cb8-26a6e5710794" (UID: "99b393cd-c63f-4007-9cb8-26a6e5710794"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.261520 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "99b393cd-c63f-4007-9cb8-26a6e5710794" (UID: "99b393cd-c63f-4007-9cb8-26a6e5710794"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.263730 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "99b393cd-c63f-4007-9cb8-26a6e5710794" (UID: "99b393cd-c63f-4007-9cb8-26a6e5710794"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.264085 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "99b393cd-c63f-4007-9cb8-26a6e5710794" (UID: "99b393cd-c63f-4007-9cb8-26a6e5710794"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.266161 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-kube-api-access-h4qjg" (OuterVolumeSpecName: "kube-api-access-h4qjg") pod "99b393cd-c63f-4007-9cb8-26a6e5710794" (UID: "99b393cd-c63f-4007-9cb8-26a6e5710794"). InnerVolumeSpecName "kube-api-access-h4qjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.269058 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "99b393cd-c63f-4007-9cb8-26a6e5710794" (UID: "99b393cd-c63f-4007-9cb8-26a6e5710794"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.269129 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "99b393cd-c63f-4007-9cb8-26a6e5710794" (UID: "99b393cd-c63f-4007-9cb8-26a6e5710794"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.270514 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "99b393cd-c63f-4007-9cb8-26a6e5710794" (UID: "99b393cd-c63f-4007-9cb8-26a6e5710794"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.282005 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "99b393cd-c63f-4007-9cb8-26a6e5710794" (UID: "99b393cd-c63f-4007-9cb8-26a6e5710794"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.299635 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "99b393cd-c63f-4007-9cb8-26a6e5710794" (UID: "99b393cd-c63f-4007-9cb8-26a6e5710794"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.300019 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-inventory" (OuterVolumeSpecName: "inventory") pod "99b393cd-c63f-4007-9cb8-26a6e5710794" (UID: "99b393cd-c63f-4007-9cb8-26a6e5710794"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.355089 4772 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.355149 4772 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.355171 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.355188 4772 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.355204 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.355216 4772 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.355226 4772 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.355242 4772 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.355260 4772 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.355273 4772 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.355287 4772 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.355302 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.355319 4772 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99b393cd-c63f-4007-9cb8-26a6e5710794-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.355333 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4qjg\" (UniqueName: \"kubernetes.io/projected/99b393cd-c63f-4007-9cb8-26a6e5710794-kube-api-access-h4qjg\") on node \"crc\" DevicePath \"\"" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.649405 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" event={"ID":"99b393cd-c63f-4007-9cb8-26a6e5710794","Type":"ContainerDied","Data":"ea0e5c79b6a67380561e289a575c4fd4961efbb97e2d901de45036bda44a7cf1"} Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.649973 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea0e5c79b6a67380561e289a575c4fd4961efbb97e2d901de45036bda44a7cf1" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.649490 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-94trn" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.802487 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4"] Nov 28 11:39:55 crc kubenswrapper[4772]: E1128 11:39:55.804270 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99b393cd-c63f-4007-9cb8-26a6e5710794" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.804395 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="99b393cd-c63f-4007-9cb8-26a6e5710794" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.804753 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="99b393cd-c63f-4007-9cb8-26a6e5710794" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.805720 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.809991 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.810171 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.810393 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.810579 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.810755 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.831191 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4"] Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.868664 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9b6d18b1-abcf-4c1f-b811-6df572185255-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.868751 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.868810 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmlf7\" (UniqueName: \"kubernetes.io/projected/9b6d18b1-abcf-4c1f-b811-6df572185255-kube-api-access-qmlf7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.869588 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.869710 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.972121 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" 
(UniqueName: \"kubernetes.io/configmap/9b6d18b1-abcf-4c1f-b811-6df572185255-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.972230 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.972268 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmlf7\" (UniqueName: \"kubernetes.io/projected/9b6d18b1-abcf-4c1f-b811-6df572185255-kube-api-access-qmlf7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.972412 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.972450 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.974003 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9b6d18b1-abcf-4c1f-b811-6df572185255-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.978098 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.979229 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.980725 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:55 crc kubenswrapper[4772]: I1128 11:39:55.991902 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmlf7\" (UniqueName: \"kubernetes.io/projected/9b6d18b1-abcf-4c1f-b811-6df572185255-kube-api-access-qmlf7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lxg4\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:56 crc kubenswrapper[4772]: I1128 11:39:56.133040 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:39:56 crc kubenswrapper[4772]: I1128 11:39:56.765385 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4"] Nov 28 11:39:57 crc kubenswrapper[4772]: I1128 11:39:57.675094 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" event={"ID":"9b6d18b1-abcf-4c1f-b811-6df572185255","Type":"ContainerStarted","Data":"052a01efbefb1185c2590beca552ffdee1f5092ed25aab64e162009d8515121e"} Nov 28 11:39:57 crc kubenswrapper[4772]: I1128 11:39:57.675587 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" event={"ID":"9b6d18b1-abcf-4c1f-b811-6df572185255","Type":"ContainerStarted","Data":"520f7138fa747fd7a557f42bd3fefbfdae66f51759b5a5ba59880b7e5c28e76d"} Nov 28 11:39:57 crc kubenswrapper[4772]: I1128 11:39:57.703262 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" podStartSLOduration=2.164413284 podStartE2EDuration="2.70323301s" podCreationTimestamp="2025-11-28 11:39:55 +0000 UTC" firstStartedPulling="2025-11-28 11:39:56.781321286 +0000 UTC m=+1995.104564523" lastFinishedPulling="2025-11-28 11:39:57.320141022 +0000 UTC m=+1995.643384249" observedRunningTime="2025-11-28 11:39:57.695778598 +0000 UTC m=+1996.019021835" watchObservedRunningTime="2025-11-28 11:39:57.70323301 +0000 UTC m=+1996.026476277" Nov 28 11:40:04 crc kubenswrapper[4772]: I1128 11:40:04.993739 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:40:05 crc kubenswrapper[4772]: I1128 11:40:05.774887 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"99211b40d876d871be4ab3485a6ae32012cb5b156fc3de5699bcb68b9f4c3d94"} Nov 28 11:41:06 crc kubenswrapper[4772]: I1128 11:41:06.456165 4772 generic.go:334] "Generic (PLEG): container finished" podID="9b6d18b1-abcf-4c1f-b811-6df572185255" containerID="052a01efbefb1185c2590beca552ffdee1f5092ed25aab64e162009d8515121e" exitCode=0 Nov 28 11:41:06 crc kubenswrapper[4772]: I1128 11:41:06.456320 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" event={"ID":"9b6d18b1-abcf-4c1f-b811-6df572185255","Type":"ContainerDied","Data":"052a01efbefb1185c2590beca552ffdee1f5092ed25aab64e162009d8515121e"} Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.068548 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.114209 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-ovn-combined-ca-bundle\") pod \"9b6d18b1-abcf-4c1f-b811-6df572185255\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.114335 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-ssh-key\") pod \"9b6d18b1-abcf-4c1f-b811-6df572185255\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.114471 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmlf7\" (UniqueName: \"kubernetes.io/projected/9b6d18b1-abcf-4c1f-b811-6df572185255-kube-api-access-qmlf7\") pod \"9b6d18b1-abcf-4c1f-b811-6df572185255\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.114530 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9b6d18b1-abcf-4c1f-b811-6df572185255-ovncontroller-config-0\") pod \"9b6d18b1-abcf-4c1f-b811-6df572185255\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.114638 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-inventory\") pod \"9b6d18b1-abcf-4c1f-b811-6df572185255\" (UID: \"9b6d18b1-abcf-4c1f-b811-6df572185255\") " Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.122555 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9b6d18b1-abcf-4c1f-b811-6df572185255" (UID: "9b6d18b1-abcf-4c1f-b811-6df572185255"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.122941 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b6d18b1-abcf-4c1f-b811-6df572185255-kube-api-access-qmlf7" (OuterVolumeSpecName: "kube-api-access-qmlf7") pod "9b6d18b1-abcf-4c1f-b811-6df572185255" (UID: "9b6d18b1-abcf-4c1f-b811-6df572185255"). InnerVolumeSpecName "kube-api-access-qmlf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.145240 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9b6d18b1-abcf-4c1f-b811-6df572185255" (UID: "9b6d18b1-abcf-4c1f-b811-6df572185255"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.146112 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6d18b1-abcf-4c1f-b811-6df572185255-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "9b6d18b1-abcf-4c1f-b811-6df572185255" (UID: "9b6d18b1-abcf-4c1f-b811-6df572185255"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.162229 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-inventory" (OuterVolumeSpecName: "inventory") pod "9b6d18b1-abcf-4c1f-b811-6df572185255" (UID: "9b6d18b1-abcf-4c1f-b811-6df572185255"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.218437 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.218479 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.218492 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmlf7\" (UniqueName: \"kubernetes.io/projected/9b6d18b1-abcf-4c1f-b811-6df572185255-kube-api-access-qmlf7\") on node \"crc\" DevicePath \"\"" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.218505 4772 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9b6d18b1-abcf-4c1f-b811-6df572185255-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.218518 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b6d18b1-abcf-4c1f-b811-6df572185255-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.507318 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" event={"ID":"9b6d18b1-abcf-4c1f-b811-6df572185255","Type":"ContainerDied","Data":"520f7138fa747fd7a557f42bd3fefbfdae66f51759b5a5ba59880b7e5c28e76d"} Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.507414 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="520f7138fa747fd7a557f42bd3fefbfdae66f51759b5a5ba59880b7e5c28e76d" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.507532 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lxg4" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.593724 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg"] Nov 28 11:41:08 crc kubenswrapper[4772]: E1128 11:41:08.594193 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b6d18b1-abcf-4c1f-b811-6df572185255" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.594214 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b6d18b1-abcf-4c1f-b811-6df572185255" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.594495 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b6d18b1-abcf-4c1f-b811-6df572185255" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.595421 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.599040 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.599670 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.599707 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.599877 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.599935 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.600003 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.616051 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg"] Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.626121 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.626220 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.626260 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-8hdsq\" (UniqueName: \"kubernetes.io/projected/49725426-9a39-40a3-8921-dadd52884a4a-kube-api-access-8hdsq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.626314 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.626409 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.626499 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.728420 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.728593 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.728684 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.728738 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: 
\"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.728771 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hdsq\" (UniqueName: \"kubernetes.io/projected/49725426-9a39-40a3-8921-dadd52884a4a-kube-api-access-8hdsq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.728822 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.735473 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.735588 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.735702 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.736581 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.737714 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.753223 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hdsq\" 
(UniqueName: \"kubernetes.io/projected/49725426-9a39-40a3-8921-dadd52884a4a-kube-api-access-8hdsq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:08 crc kubenswrapper[4772]: I1128 11:41:08.912935 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:41:09 crc kubenswrapper[4772]: I1128 11:41:09.509018 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg"] Nov 28 11:41:09 crc kubenswrapper[4772]: I1128 11:41:09.526174 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" event={"ID":"49725426-9a39-40a3-8921-dadd52884a4a","Type":"ContainerStarted","Data":"719ebca16a3dbac00b33eaac8a4e169e34746f3091c2598e94776db80d5cb9a2"} Nov 28 11:41:10 crc kubenswrapper[4772]: I1128 11:41:10.541285 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" event={"ID":"49725426-9a39-40a3-8921-dadd52884a4a","Type":"ContainerStarted","Data":"2d639fa37204c6f9f622d15d897f980951acf0817dfaf41a1a34f9c4469e0599"} Nov 28 11:41:10 crc kubenswrapper[4772]: I1128 11:41:10.563723 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" podStartSLOduration=2.080065617 podStartE2EDuration="2.563696372s" podCreationTimestamp="2025-11-28 11:41:08 +0000 UTC" firstStartedPulling="2025-11-28 11:41:09.509999905 +0000 UTC m=+2067.833243142" lastFinishedPulling="2025-11-28 11:41:09.99363064 +0000 UTC m=+2068.316873897" observedRunningTime="2025-11-28 11:41:10.559331204 +0000 UTC m=+2068.882574441" watchObservedRunningTime="2025-11-28 11:41:10.563696372 +0000 UTC m=+2068.886939599" Nov 28 11:42:02 crc kubenswrapper[4772]: I1128 11:42:02.182867 4772 generic.go:334] "Generic (PLEG): container finished" podID="49725426-9a39-40a3-8921-dadd52884a4a" containerID="2d639fa37204c6f9f622d15d897f980951acf0817dfaf41a1a34f9c4469e0599" exitCode=0 Nov 28 11:42:02 crc kubenswrapper[4772]: I1128 11:42:02.182934 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" event={"ID":"49725426-9a39-40a3-8921-dadd52884a4a","Type":"ContainerDied","Data":"2d639fa37204c6f9f622d15d897f980951acf0817dfaf41a1a34f9c4469e0599"} Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.671225 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.762574 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-nova-metadata-neutron-config-0\") pod \"49725426-9a39-40a3-8921-dadd52884a4a\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.763016 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"49725426-9a39-40a3-8921-dadd52884a4a\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.763216 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hdsq\" (UniqueName: \"kubernetes.io/projected/49725426-9a39-40a3-8921-dadd52884a4a-kube-api-access-8hdsq\") pod \"49725426-9a39-40a3-8921-dadd52884a4a\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.763640 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-ssh-key\") pod \"49725426-9a39-40a3-8921-dadd52884a4a\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.763805 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-neutron-metadata-combined-ca-bundle\") pod \"49725426-9a39-40a3-8921-dadd52884a4a\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.764036 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-inventory\") pod \"49725426-9a39-40a3-8921-dadd52884a4a\" (UID: \"49725426-9a39-40a3-8921-dadd52884a4a\") " Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.769322 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49725426-9a39-40a3-8921-dadd52884a4a-kube-api-access-8hdsq" (OuterVolumeSpecName: "kube-api-access-8hdsq") pod "49725426-9a39-40a3-8921-dadd52884a4a" (UID: "49725426-9a39-40a3-8921-dadd52884a4a"). InnerVolumeSpecName "kube-api-access-8hdsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.777049 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "49725426-9a39-40a3-8921-dadd52884a4a" (UID: "49725426-9a39-40a3-8921-dadd52884a4a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.791686 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "49725426-9a39-40a3-8921-dadd52884a4a" (UID: "49725426-9a39-40a3-8921-dadd52884a4a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.793195 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-inventory" (OuterVolumeSpecName: "inventory") pod "49725426-9a39-40a3-8921-dadd52884a4a" (UID: "49725426-9a39-40a3-8921-dadd52884a4a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.794919 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "49725426-9a39-40a3-8921-dadd52884a4a" (UID: "49725426-9a39-40a3-8921-dadd52884a4a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.805374 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "49725426-9a39-40a3-8921-dadd52884a4a" (UID: "49725426-9a39-40a3-8921-dadd52884a4a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.869824 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hdsq\" (UniqueName: \"kubernetes.io/projected/49725426-9a39-40a3-8921-dadd52884a4a-kube-api-access-8hdsq\") on node \"crc\" DevicePath \"\"" Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.870217 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.870231 4772 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.870257 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.870274 4772 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:42:03 crc kubenswrapper[4772]: I1128 11:42:03.870302 4772 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/49725426-9a39-40a3-8921-dadd52884a4a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.210757 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" event={"ID":"49725426-9a39-40a3-8921-dadd52884a4a","Type":"ContainerDied","Data":"719ebca16a3dbac00b33eaac8a4e169e34746f3091c2598e94776db80d5cb9a2"} Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.210805 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="719ebca16a3dbac00b33eaac8a4e169e34746f3091c2598e94776db80d5cb9a2" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.210927 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.320430 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp"] Nov 28 11:42:04 crc kubenswrapper[4772]: E1128 11:42:04.320980 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49725426-9a39-40a3-8921-dadd52884a4a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.321013 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="49725426-9a39-40a3-8921-dadd52884a4a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.321295 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="49725426-9a39-40a3-8921-dadd52884a4a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.322169 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.324482 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.325050 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.326333 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.326415 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.326333 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.332516 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp"] Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.379810 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.379958 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.380082 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch7sz\" (UniqueName: \"kubernetes.io/projected/0b865b7c-a1c7-4f0b-b289-d980f76a946d-kube-api-access-ch7sz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.382635 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.382780 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.485164 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.485262 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.485322 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch7sz\" (UniqueName: \"kubernetes.io/projected/0b865b7c-a1c7-4f0b-b289-d980f76a946d-kube-api-access-ch7sz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.485443 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.485545 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.499370 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.499446 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.499610 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.499828 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-ssh-key\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.506090 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch7sz\" (UniqueName: \"kubernetes.io/projected/0b865b7c-a1c7-4f0b-b289-d980f76a946d-kube-api-access-ch7sz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:04 crc kubenswrapper[4772]: I1128 11:42:04.674254 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:42:05 crc kubenswrapper[4772]: I1128 11:42:05.260187 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp"] Nov 28 11:42:06 crc kubenswrapper[4772]: I1128 11:42:06.238435 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" event={"ID":"0b865b7c-a1c7-4f0b-b289-d980f76a946d","Type":"ContainerStarted","Data":"79a8368939a562c1458df4fcf793a29924c915d98946e156309773f33474b7ae"} Nov 28 11:42:07 crc kubenswrapper[4772]: I1128 11:42:07.250792 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" event={"ID":"0b865b7c-a1c7-4f0b-b289-d980f76a946d","Type":"ContainerStarted","Data":"8e11f93450ffcd9f28a9042f2f39a57638923c88d4ee9527e4e68c3021c485e2"} Nov 28 11:42:07 crc kubenswrapper[4772]: I1128 11:42:07.278672 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" podStartSLOduration=2.44163336 podStartE2EDuration="3.27864759s" podCreationTimestamp="2025-11-28 11:42:04 +0000 UTC" firstStartedPulling="2025-11-28 11:42:05.271180457 +0000 UTC m=+2123.594423694" lastFinishedPulling="2025-11-28 11:42:06.108194667 +0000 UTC m=+2124.431437924" observedRunningTime="2025-11-28 11:42:07.273332846 +0000 UTC m=+2125.596576083" watchObservedRunningTime="2025-11-28 11:42:07.27864759 +0000 UTC m=+2125.601890817" Nov 28 11:42:23 crc kubenswrapper[4772]: I1128 11:42:23.896985 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:42:23 crc kubenswrapper[4772]: I1128 11:42:23.897702 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:42:41 crc kubenswrapper[4772]: I1128 11:42:41.081434 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-njtfq"] Nov 28 11:42:41 crc kubenswrapper[4772]: I1128 11:42:41.084077 4772 util.go:30] "No sandbox for pod can be found. 
Nov 28 11:42:41 crc kubenswrapper[4772]: I1128 11:42:41.091023 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njtfq"]
Nov 28 11:42:41 crc kubenswrapper[4772]: I1128 11:42:41.204017 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6984d3-4637-4855-8ca2-18004c0e04d6-catalog-content\") pod \"certified-operators-njtfq\" (UID: \"ba6984d3-4637-4855-8ca2-18004c0e04d6\") " pod="openshift-marketplace/certified-operators-njtfq"
Nov 28 11:42:41 crc kubenswrapper[4772]: I1128 11:42:41.204170 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6984d3-4637-4855-8ca2-18004c0e04d6-utilities\") pod \"certified-operators-njtfq\" (UID: \"ba6984d3-4637-4855-8ca2-18004c0e04d6\") " pod="openshift-marketplace/certified-operators-njtfq"
Nov 28 11:42:41 crc kubenswrapper[4772]: I1128 11:42:41.204480 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95hsq\" (UniqueName: \"kubernetes.io/projected/ba6984d3-4637-4855-8ca2-18004c0e04d6-kube-api-access-95hsq\") pod \"certified-operators-njtfq\" (UID: \"ba6984d3-4637-4855-8ca2-18004c0e04d6\") " pod="openshift-marketplace/certified-operators-njtfq"
Nov 28 11:42:41 crc kubenswrapper[4772]: I1128 11:42:41.306821 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95hsq\" (UniqueName: \"kubernetes.io/projected/ba6984d3-4637-4855-8ca2-18004c0e04d6-kube-api-access-95hsq\") pod \"certified-operators-njtfq\" (UID: \"ba6984d3-4637-4855-8ca2-18004c0e04d6\") " pod="openshift-marketplace/certified-operators-njtfq"
Nov 28 11:42:41 crc kubenswrapper[4772]: I1128 11:42:41.306903 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6984d3-4637-4855-8ca2-18004c0e04d6-catalog-content\") pod \"certified-operators-njtfq\" (UID: \"ba6984d3-4637-4855-8ca2-18004c0e04d6\") " pod="openshift-marketplace/certified-operators-njtfq"
Nov 28 11:42:41 crc kubenswrapper[4772]: I1128 11:42:41.306947 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6984d3-4637-4855-8ca2-18004c0e04d6-utilities\") pod \"certified-operators-njtfq\" (UID: \"ba6984d3-4637-4855-8ca2-18004c0e04d6\") " pod="openshift-marketplace/certified-operators-njtfq"
Nov 28 11:42:41 crc kubenswrapper[4772]: I1128 11:42:41.307431 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6984d3-4637-4855-8ca2-18004c0e04d6-catalog-content\") pod \"certified-operators-njtfq\" (UID: \"ba6984d3-4637-4855-8ca2-18004c0e04d6\") " pod="openshift-marketplace/certified-operators-njtfq"
Nov 28 11:42:41 crc kubenswrapper[4772]: I1128 11:42:41.307545 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6984d3-4637-4855-8ca2-18004c0e04d6-utilities\") pod \"certified-operators-njtfq\" (UID: \"ba6984d3-4637-4855-8ca2-18004c0e04d6\") " pod="openshift-marketplace/certified-operators-njtfq"
Nov 28 11:42:41 crc kubenswrapper[4772]: I1128 11:42:41.326794 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95hsq\" (UniqueName: \"kubernetes.io/projected/ba6984d3-4637-4855-8ca2-18004c0e04d6-kube-api-access-95hsq\") pod \"certified-operators-njtfq\" (UID: \"ba6984d3-4637-4855-8ca2-18004c0e04d6\") " pod="openshift-marketplace/certified-operators-njtfq"
Nov 28 11:42:41 crc kubenswrapper[4772]: I1128 11:42:41.426199 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njtfq"
Nov 28 11:42:42 crc kubenswrapper[4772]: I1128 11:42:42.022047 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njtfq"]
Nov 28 11:42:42 crc kubenswrapper[4772]: I1128 11:42:42.710195 4772 generic.go:334] "Generic (PLEG): container finished" podID="ba6984d3-4637-4855-8ca2-18004c0e04d6" containerID="7557de531c986fafa9eefce63a3c45540e21536bfc0d5f0c47901a6276bb6cdf" exitCode=0
Nov 28 11:42:42 crc kubenswrapper[4772]: I1128 11:42:42.710322 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njtfq" event={"ID":"ba6984d3-4637-4855-8ca2-18004c0e04d6","Type":"ContainerDied","Data":"7557de531c986fafa9eefce63a3c45540e21536bfc0d5f0c47901a6276bb6cdf"}
Nov 28 11:42:42 crc kubenswrapper[4772]: I1128 11:42:42.711492 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njtfq" event={"ID":"ba6984d3-4637-4855-8ca2-18004c0e04d6","Type":"ContainerStarted","Data":"46c9e81d3c04a80cbab5412c496ea14e2397ad3a62276b4961ce95e8075775e4"}
Nov 28 11:42:44 crc kubenswrapper[4772]: I1128 11:42:44.736639 4772 generic.go:334] "Generic (PLEG): container finished" podID="ba6984d3-4637-4855-8ca2-18004c0e04d6" containerID="8ae6072e2bfbb6d6ef8dda4353f14d6c8351765ac65c02ecce8740aaf54c7a41" exitCode=0
Nov 28 11:42:44 crc kubenswrapper[4772]: I1128 11:42:44.737441 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njtfq" event={"ID":"ba6984d3-4637-4855-8ca2-18004c0e04d6","Type":"ContainerDied","Data":"8ae6072e2bfbb6d6ef8dda4353f14d6c8351765ac65c02ecce8740aaf54c7a41"}
Nov 28 11:42:45 crc kubenswrapper[4772]: I1128 11:42:45.750422 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njtfq" event={"ID":"ba6984d3-4637-4855-8ca2-18004c0e04d6","Type":"ContainerStarted","Data":"c2de96e4a2a7d0794fe6f1afff82040b7ede2b75ba4fba6775d58000ebae2dc1"}
Nov 28 11:42:45 crc kubenswrapper[4772]: I1128 11:42:45.775200 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-njtfq" podStartSLOduration=2.138550626 podStartE2EDuration="4.775171927s" podCreationTimestamp="2025-11-28 11:42:41 +0000 UTC" firstStartedPulling="2025-11-28 11:42:42.712765012 +0000 UTC m=+2161.036008259" lastFinishedPulling="2025-11-28 11:42:45.349386333 +0000 UTC m=+2163.672629560" observedRunningTime="2025-11-28 11:42:45.769534094 +0000 UTC m=+2164.092777351" watchObservedRunningTime="2025-11-28 11:42:45.775171927 +0000 UTC m=+2164.098415154"
Nov 28 11:42:51 crc kubenswrapper[4772]: I1128 11:42:51.426590 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-njtfq"
Nov 28 11:42:51 crc kubenswrapper[4772]: I1128 11:42:51.426996 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-njtfq"
Nov 28 11:42:51 crc kubenswrapper[4772]: I1128 11:42:51.524461 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-njtfq"
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-njtfq" Nov 28 11:42:51 crc kubenswrapper[4772]: I1128 11:42:51.924310 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-njtfq" Nov 28 11:42:51 crc kubenswrapper[4772]: I1128 11:42:51.989001 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njtfq"] Nov 28 11:42:53 crc kubenswrapper[4772]: I1128 11:42:53.880521 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-njtfq" podUID="ba6984d3-4637-4855-8ca2-18004c0e04d6" containerName="registry-server" containerID="cri-o://c2de96e4a2a7d0794fe6f1afff82040b7ede2b75ba4fba6775d58000ebae2dc1" gracePeriod=2 Nov 28 11:42:53 crc kubenswrapper[4772]: I1128 11:42:53.896972 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:42:53 crc kubenswrapper[4772]: I1128 11:42:53.897056 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.427067 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njtfq" Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.535446 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6984d3-4637-4855-8ca2-18004c0e04d6-catalog-content\") pod \"ba6984d3-4637-4855-8ca2-18004c0e04d6\" (UID: \"ba6984d3-4637-4855-8ca2-18004c0e04d6\") " Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.535766 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95hsq\" (UniqueName: \"kubernetes.io/projected/ba6984d3-4637-4855-8ca2-18004c0e04d6-kube-api-access-95hsq\") pod \"ba6984d3-4637-4855-8ca2-18004c0e04d6\" (UID: \"ba6984d3-4637-4855-8ca2-18004c0e04d6\") " Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.535818 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6984d3-4637-4855-8ca2-18004c0e04d6-utilities\") pod \"ba6984d3-4637-4855-8ca2-18004c0e04d6\" (UID: \"ba6984d3-4637-4855-8ca2-18004c0e04d6\") " Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.537013 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba6984d3-4637-4855-8ca2-18004c0e04d6-utilities" (OuterVolumeSpecName: "utilities") pod "ba6984d3-4637-4855-8ca2-18004c0e04d6" (UID: "ba6984d3-4637-4855-8ca2-18004c0e04d6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.544504 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba6984d3-4637-4855-8ca2-18004c0e04d6-kube-api-access-95hsq" (OuterVolumeSpecName: "kube-api-access-95hsq") pod "ba6984d3-4637-4855-8ca2-18004c0e04d6" (UID: "ba6984d3-4637-4855-8ca2-18004c0e04d6"). InnerVolumeSpecName "kube-api-access-95hsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.593405 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba6984d3-4637-4855-8ca2-18004c0e04d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba6984d3-4637-4855-8ca2-18004c0e04d6" (UID: "ba6984d3-4637-4855-8ca2-18004c0e04d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.638305 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6984d3-4637-4855-8ca2-18004c0e04d6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.638382 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95hsq\" (UniqueName: \"kubernetes.io/projected/ba6984d3-4637-4855-8ca2-18004c0e04d6-kube-api-access-95hsq\") on node \"crc\" DevicePath \"\"" Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.638405 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6984d3-4637-4855-8ca2-18004c0e04d6-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.893414 4772 generic.go:334] "Generic (PLEG): container finished" podID="ba6984d3-4637-4855-8ca2-18004c0e04d6" containerID="c2de96e4a2a7d0794fe6f1afff82040b7ede2b75ba4fba6775d58000ebae2dc1" exitCode=0 Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.893501 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njtfq" event={"ID":"ba6984d3-4637-4855-8ca2-18004c0e04d6","Type":"ContainerDied","Data":"c2de96e4a2a7d0794fe6f1afff82040b7ede2b75ba4fba6775d58000ebae2dc1"} Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.893534 4772 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.893570 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njtfq" event={"ID":"ba6984d3-4637-4855-8ca2-18004c0e04d6","Type":"ContainerDied","Data":"46c9e81d3c04a80cbab5412c496ea14e2397ad3a62276b4961ce95e8075775e4"}
Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.893595 4772 scope.go:117] "RemoveContainer" containerID="c2de96e4a2a7d0794fe6f1afff82040b7ede2b75ba4fba6775d58000ebae2dc1"
Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.928761 4772 scope.go:117] "RemoveContainer" containerID="8ae6072e2bfbb6d6ef8dda4353f14d6c8351765ac65c02ecce8740aaf54c7a41"
Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.941273 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njtfq"]
Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.954200 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-njtfq"]
Nov 28 11:42:54 crc kubenswrapper[4772]: I1128 11:42:54.962137 4772 scope.go:117] "RemoveContainer" containerID="7557de531c986fafa9eefce63a3c45540e21536bfc0d5f0c47901a6276bb6cdf"
Nov 28 11:42:55 crc kubenswrapper[4772]: I1128 11:42:55.019126 4772 scope.go:117] "RemoveContainer" containerID="c2de96e4a2a7d0794fe6f1afff82040b7ede2b75ba4fba6775d58000ebae2dc1"
Nov 28 11:42:55 crc kubenswrapper[4772]: E1128 11:42:55.019886 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2de96e4a2a7d0794fe6f1afff82040b7ede2b75ba4fba6775d58000ebae2dc1\": container with ID starting with c2de96e4a2a7d0794fe6f1afff82040b7ede2b75ba4fba6775d58000ebae2dc1 not found: ID does not exist" containerID="c2de96e4a2a7d0794fe6f1afff82040b7ede2b75ba4fba6775d58000ebae2dc1"
Nov 28 11:42:55 crc kubenswrapper[4772]: I1128 11:42:55.019935 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2de96e4a2a7d0794fe6f1afff82040b7ede2b75ba4fba6775d58000ebae2dc1"} err="failed to get container status \"c2de96e4a2a7d0794fe6f1afff82040b7ede2b75ba4fba6775d58000ebae2dc1\": rpc error: code = NotFound desc = could not find container \"c2de96e4a2a7d0794fe6f1afff82040b7ede2b75ba4fba6775d58000ebae2dc1\": container with ID starting with c2de96e4a2a7d0794fe6f1afff82040b7ede2b75ba4fba6775d58000ebae2dc1 not found: ID does not exist"
Nov 28 11:42:55 crc kubenswrapper[4772]: I1128 11:42:55.019963 4772 scope.go:117] "RemoveContainer" containerID="8ae6072e2bfbb6d6ef8dda4353f14d6c8351765ac65c02ecce8740aaf54c7a41"
Nov 28 11:42:55 crc kubenswrapper[4772]: E1128 11:42:55.020583 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ae6072e2bfbb6d6ef8dda4353f14d6c8351765ac65c02ecce8740aaf54c7a41\": container with ID starting with 8ae6072e2bfbb6d6ef8dda4353f14d6c8351765ac65c02ecce8740aaf54c7a41 not found: ID does not exist" containerID="8ae6072e2bfbb6d6ef8dda4353f14d6c8351765ac65c02ecce8740aaf54c7a41"
Nov 28 11:42:55 crc kubenswrapper[4772]: I1128 11:42:55.020640 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ae6072e2bfbb6d6ef8dda4353f14d6c8351765ac65c02ecce8740aaf54c7a41"} err="failed to get container status \"8ae6072e2bfbb6d6ef8dda4353f14d6c8351765ac65c02ecce8740aaf54c7a41\": rpc error: code = NotFound desc = could not find container \"8ae6072e2bfbb6d6ef8dda4353f14d6c8351765ac65c02ecce8740aaf54c7a41\": container with ID starting with 8ae6072e2bfbb6d6ef8dda4353f14d6c8351765ac65c02ecce8740aaf54c7a41 not found: ID does not exist"
Nov 28 11:42:55 crc kubenswrapper[4772]: I1128 11:42:55.020677 4772 scope.go:117] "RemoveContainer" containerID="7557de531c986fafa9eefce63a3c45540e21536bfc0d5f0c47901a6276bb6cdf"
Nov 28 11:42:55 crc kubenswrapper[4772]: E1128 11:42:55.021305 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7557de531c986fafa9eefce63a3c45540e21536bfc0d5f0c47901a6276bb6cdf\": container with ID starting with 7557de531c986fafa9eefce63a3c45540e21536bfc0d5f0c47901a6276bb6cdf not found: ID does not exist" containerID="7557de531c986fafa9eefce63a3c45540e21536bfc0d5f0c47901a6276bb6cdf"
Nov 28 11:42:55 crc kubenswrapper[4772]: I1128 11:42:55.021390 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7557de531c986fafa9eefce63a3c45540e21536bfc0d5f0c47901a6276bb6cdf"} err="failed to get container status \"7557de531c986fafa9eefce63a3c45540e21536bfc0d5f0c47901a6276bb6cdf\": rpc error: code = NotFound desc = could not find container \"7557de531c986fafa9eefce63a3c45540e21536bfc0d5f0c47901a6276bb6cdf\": container with ID starting with 7557de531c986fafa9eefce63a3c45540e21536bfc0d5f0c47901a6276bb6cdf not found: ID does not exist"
Nov 28 11:42:56 crc kubenswrapper[4772]: I1128 11:42:56.009838 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba6984d3-4637-4855-8ca2-18004c0e04d6" path="/var/lib/kubelet/pods/ba6984d3-4637-4855-8ca2-18004c0e04d6/volumes"
Nov 28 11:43:23 crc kubenswrapper[4772]: I1128 11:43:23.897063 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 11:43:23 crc kubenswrapper[4772]: I1128 11:43:23.898650 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 11:43:23 crc kubenswrapper[4772]: I1128 11:43:23.898756 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk"
Nov 28 11:43:23 crc kubenswrapper[4772]: I1128 11:43:23.899921 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99211b40d876d871be4ab3485a6ae32012cb5b156fc3de5699bcb68b9f4c3d94"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 11:43:23 crc kubenswrapper[4772]: I1128 11:43:23.900024 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://99211b40d876d871be4ab3485a6ae32012cb5b156fc3de5699bcb68b9f4c3d94" gracePeriod=600
Nov 28 11:43:24 crc kubenswrapper[4772]: I1128 11:43:24.237167 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="99211b40d876d871be4ab3485a6ae32012cb5b156fc3de5699bcb68b9f4c3d94" exitCode=0
generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="99211b40d876d871be4ab3485a6ae32012cb5b156fc3de5699bcb68b9f4c3d94" exitCode=0 Nov 28 11:43:24 crc kubenswrapper[4772]: I1128 11:43:24.237236 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"99211b40d876d871be4ab3485a6ae32012cb5b156fc3de5699bcb68b9f4c3d94"} Nov 28 11:43:24 crc kubenswrapper[4772]: I1128 11:43:24.237676 4772 scope.go:117] "RemoveContainer" containerID="095da9301917f78882c2d641f05b5232d289d344b277de72dec42ccc768c0c4f" Nov 28 11:43:25 crc kubenswrapper[4772]: I1128 11:43:25.251122 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"} Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.236444 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vs7j8"] Nov 28 11:44:36 crc kubenswrapper[4772]: E1128 11:44:36.237759 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6984d3-4637-4855-8ca2-18004c0e04d6" containerName="extract-utilities" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.237781 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6984d3-4637-4855-8ca2-18004c0e04d6" containerName="extract-utilities" Nov 28 11:44:36 crc kubenswrapper[4772]: E1128 11:44:36.237847 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6984d3-4637-4855-8ca2-18004c0e04d6" containerName="registry-server" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.237862 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6984d3-4637-4855-8ca2-18004c0e04d6" containerName="registry-server" Nov 28 11:44:36 crc kubenswrapper[4772]: E1128 11:44:36.238075 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6984d3-4637-4855-8ca2-18004c0e04d6" containerName="extract-content" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.238090 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6984d3-4637-4855-8ca2-18004c0e04d6" containerName="extract-content" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.238484 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba6984d3-4637-4855-8ca2-18004c0e04d6" containerName="registry-server" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.241013 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vs7j8" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.258714 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs7j8"] Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.326888 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2pzt\" (UniqueName: \"kubernetes.io/projected/1ce6c210-9eec-46d9-a171-7e9e84f55b55-kube-api-access-k2pzt\") pod \"redhat-marketplace-vs7j8\" (UID: \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\") " pod="openshift-marketplace/redhat-marketplace-vs7j8" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.326981 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce6c210-9eec-46d9-a171-7e9e84f55b55-catalog-content\") pod \"redhat-marketplace-vs7j8\" (UID: \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\") " pod="openshift-marketplace/redhat-marketplace-vs7j8" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.327217 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce6c210-9eec-46d9-a171-7e9e84f55b55-utilities\") pod \"redhat-marketplace-vs7j8\" (UID: \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\") " pod="openshift-marketplace/redhat-marketplace-vs7j8" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.424520 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ntq52"] Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.427887 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ntq52" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.428865 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2pzt\" (UniqueName: \"kubernetes.io/projected/1ce6c210-9eec-46d9-a171-7e9e84f55b55-kube-api-access-k2pzt\") pod \"redhat-marketplace-vs7j8\" (UID: \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\") " pod="openshift-marketplace/redhat-marketplace-vs7j8" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.428936 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce6c210-9eec-46d9-a171-7e9e84f55b55-catalog-content\") pod \"redhat-marketplace-vs7j8\" (UID: \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\") " pod="openshift-marketplace/redhat-marketplace-vs7j8" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.429104 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce6c210-9eec-46d9-a171-7e9e84f55b55-utilities\") pod \"redhat-marketplace-vs7j8\" (UID: \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\") " pod="openshift-marketplace/redhat-marketplace-vs7j8" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.429823 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce6c210-9eec-46d9-a171-7e9e84f55b55-utilities\") pod \"redhat-marketplace-vs7j8\" (UID: \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\") " pod="openshift-marketplace/redhat-marketplace-vs7j8" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.430290 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce6c210-9eec-46d9-a171-7e9e84f55b55-catalog-content\") pod \"redhat-marketplace-vs7j8\" (UID: \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\") " pod="openshift-marketplace/redhat-marketplace-vs7j8" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.435134 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ntq52"] Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.461805 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2pzt\" (UniqueName: \"kubernetes.io/projected/1ce6c210-9eec-46d9-a171-7e9e84f55b55-kube-api-access-k2pzt\") pod \"redhat-marketplace-vs7j8\" (UID: \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\") " pod="openshift-marketplace/redhat-marketplace-vs7j8" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.531112 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/382fe87a-e016-4450-9309-6f52d12b1131-utilities\") pod \"redhat-operators-ntq52\" (UID: \"382fe87a-e016-4450-9309-6f52d12b1131\") " pod="openshift-marketplace/redhat-operators-ntq52" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.531194 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/382fe87a-e016-4450-9309-6f52d12b1131-catalog-content\") pod \"redhat-operators-ntq52\" (UID: \"382fe87a-e016-4450-9309-6f52d12b1131\") " pod="openshift-marketplace/redhat-operators-ntq52" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.531627 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqp4s\" (UniqueName: \"kubernetes.io/projected/382fe87a-e016-4450-9309-6f52d12b1131-kube-api-access-vqp4s\") pod \"redhat-operators-ntq52\" (UID: \"382fe87a-e016-4450-9309-6f52d12b1131\") " pod="openshift-marketplace/redhat-operators-ntq52" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.584224 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vs7j8" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.634796 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/382fe87a-e016-4450-9309-6f52d12b1131-utilities\") pod \"redhat-operators-ntq52\" (UID: \"382fe87a-e016-4450-9309-6f52d12b1131\") " pod="openshift-marketplace/redhat-operators-ntq52" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.635453 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/382fe87a-e016-4450-9309-6f52d12b1131-catalog-content\") pod \"redhat-operators-ntq52\" (UID: \"382fe87a-e016-4450-9309-6f52d12b1131\") " pod="openshift-marketplace/redhat-operators-ntq52" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.635601 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqp4s\" (UniqueName: \"kubernetes.io/projected/382fe87a-e016-4450-9309-6f52d12b1131-kube-api-access-vqp4s\") pod \"redhat-operators-ntq52\" (UID: \"382fe87a-e016-4450-9309-6f52d12b1131\") " pod="openshift-marketplace/redhat-operators-ntq52" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.635966 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/382fe87a-e016-4450-9309-6f52d12b1131-utilities\") pod \"redhat-operators-ntq52\" (UID: \"382fe87a-e016-4450-9309-6f52d12b1131\") " pod="openshift-marketplace/redhat-operators-ntq52" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.636063 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/382fe87a-e016-4450-9309-6f52d12b1131-catalog-content\") pod \"redhat-operators-ntq52\" (UID: \"382fe87a-e016-4450-9309-6f52d12b1131\") " pod="openshift-marketplace/redhat-operators-ntq52" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.655043 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqp4s\" (UniqueName: \"kubernetes.io/projected/382fe87a-e016-4450-9309-6f52d12b1131-kube-api-access-vqp4s\") pod \"redhat-operators-ntq52\" (UID: \"382fe87a-e016-4450-9309-6f52d12b1131\") " pod="openshift-marketplace/redhat-operators-ntq52" Nov 28 11:44:36 crc kubenswrapper[4772]: I1128 11:44:36.754581 4772 util.go:30] "No sandbox for pod can be found. 
Nov 28 11:44:37 crc kubenswrapper[4772]: I1128 11:44:37.096457 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs7j8"]
Nov 28 11:44:37 crc kubenswrapper[4772]: I1128 11:44:37.261065 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ntq52"]
Nov 28 11:44:38 crc kubenswrapper[4772]: I1128 11:44:38.089842 4772 generic.go:334] "Generic (PLEG): container finished" podID="1ce6c210-9eec-46d9-a171-7e9e84f55b55" containerID="f71862723b97bc6e0db28ccb9382cba66eeb212e9f9b6324247f4b5e35254fcf" exitCode=0
Nov 28 11:44:38 crc kubenswrapper[4772]: I1128 11:44:38.090048 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs7j8" event={"ID":"1ce6c210-9eec-46d9-a171-7e9e84f55b55","Type":"ContainerDied","Data":"f71862723b97bc6e0db28ccb9382cba66eeb212e9f9b6324247f4b5e35254fcf"}
Nov 28 11:44:38 crc kubenswrapper[4772]: I1128 11:44:38.090173 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs7j8" event={"ID":"1ce6c210-9eec-46d9-a171-7e9e84f55b55","Type":"ContainerStarted","Data":"7092ed7fbd3a944cacc70e6d5f930676102070e94f4c9f3f5f7cf47905785b05"}
Nov 28 11:44:38 crc kubenswrapper[4772]: I1128 11:44:38.092195 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 11:44:38 crc kubenswrapper[4772]: I1128 11:44:38.095929 4772 generic.go:334] "Generic (PLEG): container finished" podID="382fe87a-e016-4450-9309-6f52d12b1131" containerID="72e182b534c58326d1d958b99f81d87ddf249824cbdd4356cb99b07240dbed5b" exitCode=0
Nov 28 11:44:38 crc kubenswrapper[4772]: I1128 11:44:38.096010 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntq52" event={"ID":"382fe87a-e016-4450-9309-6f52d12b1131","Type":"ContainerDied","Data":"72e182b534c58326d1d958b99f81d87ddf249824cbdd4356cb99b07240dbed5b"}
Nov 28 11:44:38 crc kubenswrapper[4772]: I1128 11:44:38.096057 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntq52" event={"ID":"382fe87a-e016-4450-9309-6f52d12b1131","Type":"ContainerStarted","Data":"98f28d4d6430069d47bd41a2459cc5a597b40218a52b1af73be688b3e0a56e67"}
Nov 28 11:44:40 crc kubenswrapper[4772]: I1128 11:44:40.122958 4772 generic.go:334] "Generic (PLEG): container finished" podID="1ce6c210-9eec-46d9-a171-7e9e84f55b55" containerID="370ef8b4a7ca0b0b0672bd63ffbd4c5e5bffed19b017109646050a170a4b7563" exitCode=0
Nov 28 11:44:40 crc kubenswrapper[4772]: I1128 11:44:40.123169 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs7j8" event={"ID":"1ce6c210-9eec-46d9-a171-7e9e84f55b55","Type":"ContainerDied","Data":"370ef8b4a7ca0b0b0672bd63ffbd4c5e5bffed19b017109646050a170a4b7563"}
Nov 28 11:44:40 crc kubenswrapper[4772]: I1128 11:44:40.139421 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntq52" event={"ID":"382fe87a-e016-4450-9309-6f52d12b1131","Type":"ContainerStarted","Data":"1e447ec5910bd33d56d902daa60e726e2ab59f1ef99269cc1fffa75e03f3d4c6"}
Nov 28 11:44:41 crc kubenswrapper[4772]: I1128 11:44:41.154023 4772 generic.go:334] "Generic (PLEG): container finished" podID="382fe87a-e016-4450-9309-6f52d12b1131" containerID="1e447ec5910bd33d56d902daa60e726e2ab59f1ef99269cc1fffa75e03f3d4c6" exitCode=0
Nov 28 11:44:41 crc kubenswrapper[4772]: I1128 11:44:41.154136 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntq52" event={"ID":"382fe87a-e016-4450-9309-6f52d12b1131","Type":"ContainerDied","Data":"1e447ec5910bd33d56d902daa60e726e2ab59f1ef99269cc1fffa75e03f3d4c6"}
Nov 28 11:44:43 crc kubenswrapper[4772]: I1128 11:44:43.179417 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs7j8" event={"ID":"1ce6c210-9eec-46d9-a171-7e9e84f55b55","Type":"ContainerStarted","Data":"116f47ba6297bf11b9b452af034cab31347f5ed30a147e2f90016c62339e588d"}
Nov 28 11:44:43 crc kubenswrapper[4772]: I1128 11:44:43.183177 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntq52" event={"ID":"382fe87a-e016-4450-9309-6f52d12b1131","Type":"ContainerStarted","Data":"138c91bcc157f0f81ae1d56f51532a80bb88ae9f2d75359942179997466a1f38"}
Nov 28 11:44:43 crc kubenswrapper[4772]: I1128 11:44:43.201952 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vs7j8" podStartSLOduration=4.583747228 podStartE2EDuration="7.201928477s" podCreationTimestamp="2025-11-28 11:44:36 +0000 UTC" firstStartedPulling="2025-11-28 11:44:38.091901183 +0000 UTC m=+2276.415144420" lastFinishedPulling="2025-11-28 11:44:40.710082432 +0000 UTC m=+2279.033325669" observedRunningTime="2025-11-28 11:44:43.198750331 +0000 UTC m=+2281.521993568" watchObservedRunningTime="2025-11-28 11:44:43.201928477 +0000 UTC m=+2281.525171704"
Nov 28 11:44:43 crc kubenswrapper[4772]: I1128 11:44:43.233604 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ntq52" podStartSLOduration=3.125096947 podStartE2EDuration="7.233578895s" podCreationTimestamp="2025-11-28 11:44:36 +0000 UTC" firstStartedPulling="2025-11-28 11:44:38.097260028 +0000 UTC m=+2276.420503255" lastFinishedPulling="2025-11-28 11:44:42.205741976 +0000 UTC m=+2280.528985203" observedRunningTime="2025-11-28 11:44:43.227080699 +0000 UTC m=+2281.550323926" watchObservedRunningTime="2025-11-28 11:44:43.233578895 +0000 UTC m=+2281.556822122"
Nov 28 11:44:46 crc kubenswrapper[4772]: I1128 11:44:46.584582 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vs7j8"
Nov 28 11:44:46 crc kubenswrapper[4772]: I1128 11:44:46.585172 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vs7j8"
Nov 28 11:44:46 crc kubenswrapper[4772]: I1128 11:44:46.633262 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vs7j8"
Nov 28 11:44:46 crc kubenswrapper[4772]: I1128 11:44:46.755802 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ntq52"
Nov 28 11:44:46 crc kubenswrapper[4772]: I1128 11:44:46.756043 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ntq52"
Nov 28 11:44:47 crc kubenswrapper[4772]: I1128 11:44:47.277045 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vs7j8"
Nov 28 11:44:47 crc kubenswrapper[4772]: I1128 11:44:47.413739 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs7j8"]
Nov 28 11:44:47 crc kubenswrapper[4772]: I1128 11:44:47.807834 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ntq52" podUID="382fe87a-e016-4450-9309-6f52d12b1131" containerName="registry-server" probeResult="failure" output=<
Nov 28 11:44:47 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s
Nov 28 11:44:47 crc kubenswrapper[4772]: >
Nov 28 11:44:49 crc kubenswrapper[4772]: I1128 11:44:49.252013 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vs7j8" podUID="1ce6c210-9eec-46d9-a171-7e9e84f55b55" containerName="registry-server" containerID="cri-o://116f47ba6297bf11b9b452af034cab31347f5ed30a147e2f90016c62339e588d" gracePeriod=2
Nov 28 11:44:49 crc kubenswrapper[4772]: I1128 11:44:49.805861 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vs7j8"
Nov 28 11:44:49 crc kubenswrapper[4772]: I1128 11:44:49.944325 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2pzt\" (UniqueName: \"kubernetes.io/projected/1ce6c210-9eec-46d9-a171-7e9e84f55b55-kube-api-access-k2pzt\") pod \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\" (UID: \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\") "
Nov 28 11:44:49 crc kubenswrapper[4772]: I1128 11:44:49.944706 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce6c210-9eec-46d9-a171-7e9e84f55b55-utilities\") pod \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\" (UID: \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\") "
Nov 28 11:44:49 crc kubenswrapper[4772]: I1128 11:44:49.944749 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce6c210-9eec-46d9-a171-7e9e84f55b55-catalog-content\") pod \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\" (UID: \"1ce6c210-9eec-46d9-a171-7e9e84f55b55\") "
Nov 28 11:44:49 crc kubenswrapper[4772]: I1128 11:44:49.945835 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ce6c210-9eec-46d9-a171-7e9e84f55b55-utilities" (OuterVolumeSpecName: "utilities") pod "1ce6c210-9eec-46d9-a171-7e9e84f55b55" (UID: "1ce6c210-9eec-46d9-a171-7e9e84f55b55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:44:49 crc kubenswrapper[4772]: I1128 11:44:49.949494 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ce6c210-9eec-46d9-a171-7e9e84f55b55-kube-api-access-k2pzt" (OuterVolumeSpecName: "kube-api-access-k2pzt") pod "1ce6c210-9eec-46d9-a171-7e9e84f55b55" (UID: "1ce6c210-9eec-46d9-a171-7e9e84f55b55"). InnerVolumeSpecName "kube-api-access-k2pzt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:44:49 crc kubenswrapper[4772]: I1128 11:44:49.970551 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ce6c210-9eec-46d9-a171-7e9e84f55b55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ce6c210-9eec-46d9-a171-7e9e84f55b55" (UID: "1ce6c210-9eec-46d9-a171-7e9e84f55b55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.047688 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ce6c210-9eec-46d9-a171-7e9e84f55b55-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.047748 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ce6c210-9eec-46d9-a171-7e9e84f55b55-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.047769 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2pzt\" (UniqueName: \"kubernetes.io/projected/1ce6c210-9eec-46d9-a171-7e9e84f55b55-kube-api-access-k2pzt\") on node \"crc\" DevicePath \"\""
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.267475 4772 generic.go:334] "Generic (PLEG): container finished" podID="1ce6c210-9eec-46d9-a171-7e9e84f55b55" containerID="116f47ba6297bf11b9b452af034cab31347f5ed30a147e2f90016c62339e588d" exitCode=0
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.267531 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs7j8" event={"ID":"1ce6c210-9eec-46d9-a171-7e9e84f55b55","Type":"ContainerDied","Data":"116f47ba6297bf11b9b452af034cab31347f5ed30a147e2f90016c62339e588d"}
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.267562 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs7j8" event={"ID":"1ce6c210-9eec-46d9-a171-7e9e84f55b55","Type":"ContainerDied","Data":"7092ed7fbd3a944cacc70e6d5f930676102070e94f4c9f3f5f7cf47905785b05"}
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.267572 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vs7j8"
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.267585 4772 scope.go:117] "RemoveContainer" containerID="116f47ba6297bf11b9b452af034cab31347f5ed30a147e2f90016c62339e588d"
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.317103 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs7j8"]
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.319957 4772 scope.go:117] "RemoveContainer" containerID="370ef8b4a7ca0b0b0672bd63ffbd4c5e5bffed19b017109646050a170a4b7563"
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.329416 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs7j8"]
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.353852 4772 scope.go:117] "RemoveContainer" containerID="f71862723b97bc6e0db28ccb9382cba66eeb212e9f9b6324247f4b5e35254fcf"
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.425501 4772 scope.go:117] "RemoveContainer" containerID="116f47ba6297bf11b9b452af034cab31347f5ed30a147e2f90016c62339e588d"
Nov 28 11:44:50 crc kubenswrapper[4772]: E1128 11:44:50.426270 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"116f47ba6297bf11b9b452af034cab31347f5ed30a147e2f90016c62339e588d\": container with ID starting with 116f47ba6297bf11b9b452af034cab31347f5ed30a147e2f90016c62339e588d not found: ID does not exist" containerID="116f47ba6297bf11b9b452af034cab31347f5ed30a147e2f90016c62339e588d"
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.426337 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"116f47ba6297bf11b9b452af034cab31347f5ed30a147e2f90016c62339e588d"} err="failed to get container status \"116f47ba6297bf11b9b452af034cab31347f5ed30a147e2f90016c62339e588d\": rpc error: code = NotFound desc = could not find container \"116f47ba6297bf11b9b452af034cab31347f5ed30a147e2f90016c62339e588d\": container with ID starting with 116f47ba6297bf11b9b452af034cab31347f5ed30a147e2f90016c62339e588d not found: ID does not exist"
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.426530 4772 scope.go:117] "RemoveContainer" containerID="370ef8b4a7ca0b0b0672bd63ffbd4c5e5bffed19b017109646050a170a4b7563"
Nov 28 11:44:50 crc kubenswrapper[4772]: E1128 11:44:50.427047 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"370ef8b4a7ca0b0b0672bd63ffbd4c5e5bffed19b017109646050a170a4b7563\": container with ID starting with 370ef8b4a7ca0b0b0672bd63ffbd4c5e5bffed19b017109646050a170a4b7563 not found: ID does not exist" containerID="370ef8b4a7ca0b0b0672bd63ffbd4c5e5bffed19b017109646050a170a4b7563"
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.427117 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370ef8b4a7ca0b0b0672bd63ffbd4c5e5bffed19b017109646050a170a4b7563"} err="failed to get container status \"370ef8b4a7ca0b0b0672bd63ffbd4c5e5bffed19b017109646050a170a4b7563\": rpc error: code = NotFound desc = could not find container \"370ef8b4a7ca0b0b0672bd63ffbd4c5e5bffed19b017109646050a170a4b7563\": container with ID starting with 370ef8b4a7ca0b0b0672bd63ffbd4c5e5bffed19b017109646050a170a4b7563 not found: ID does not exist"
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.427162 4772 scope.go:117] "RemoveContainer" containerID="f71862723b97bc6e0db28ccb9382cba66eeb212e9f9b6324247f4b5e35254fcf"
Nov 28 11:44:50 crc kubenswrapper[4772]: E1128 11:44:50.427643 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f71862723b97bc6e0db28ccb9382cba66eeb212e9f9b6324247f4b5e35254fcf\": container with ID starting with f71862723b97bc6e0db28ccb9382cba66eeb212e9f9b6324247f4b5e35254fcf not found: ID does not exist" containerID="f71862723b97bc6e0db28ccb9382cba66eeb212e9f9b6324247f4b5e35254fcf"
Nov 28 11:44:50 crc kubenswrapper[4772]: I1128 11:44:50.427683 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71862723b97bc6e0db28ccb9382cba66eeb212e9f9b6324247f4b5e35254fcf"} err="failed to get container status \"f71862723b97bc6e0db28ccb9382cba66eeb212e9f9b6324247f4b5e35254fcf\": rpc error: code = NotFound desc = could not find container \"f71862723b97bc6e0db28ccb9382cba66eeb212e9f9b6324247f4b5e35254fcf\": container with ID starting with f71862723b97bc6e0db28ccb9382cba66eeb212e9f9b6324247f4b5e35254fcf not found: ID does not exist"
Nov 28 11:44:52 crc kubenswrapper[4772]: I1128 11:44:52.008416 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ce6c210-9eec-46d9-a171-7e9e84f55b55" path="/var/lib/kubelet/pods/1ce6c210-9eec-46d9-a171-7e9e84f55b55/volumes"
Nov 28 11:44:56 crc kubenswrapper[4772]: I1128 11:44:56.820353 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ntq52"
Nov 28 11:44:56 crc kubenswrapper[4772]: I1128 11:44:56.902271 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ntq52"
Nov 28 11:44:57 crc kubenswrapper[4772]: I1128 11:44:57.078204 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ntq52"]
Nov 28 11:44:58 crc kubenswrapper[4772]: I1128 11:44:58.379394 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ntq52" podUID="382fe87a-e016-4450-9309-6f52d12b1131" containerName="registry-server" containerID="cri-o://138c91bcc157f0f81ae1d56f51532a80bb88ae9f2d75359942179997466a1f38" gracePeriod=2
Nov 28 11:44:58 crc kubenswrapper[4772]: I1128 11:44:58.868248 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ntq52"
Nov 28 11:44:58 crc kubenswrapper[4772]: I1128 11:44:58.962151 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/382fe87a-e016-4450-9309-6f52d12b1131-catalog-content\") pod \"382fe87a-e016-4450-9309-6f52d12b1131\" (UID: \"382fe87a-e016-4450-9309-6f52d12b1131\") "
Nov 28 11:44:58 crc kubenswrapper[4772]: I1128 11:44:58.962499 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqp4s\" (UniqueName: \"kubernetes.io/projected/382fe87a-e016-4450-9309-6f52d12b1131-kube-api-access-vqp4s\") pod \"382fe87a-e016-4450-9309-6f52d12b1131\" (UID: \"382fe87a-e016-4450-9309-6f52d12b1131\") "
Nov 28 11:44:58 crc kubenswrapper[4772]: I1128 11:44:58.962539 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/382fe87a-e016-4450-9309-6f52d12b1131-utilities\") pod \"382fe87a-e016-4450-9309-6f52d12b1131\" (UID: \"382fe87a-e016-4450-9309-6f52d12b1131\") "
Nov 28 11:44:58 crc kubenswrapper[4772]: I1128 11:44:58.964730 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/382fe87a-e016-4450-9309-6f52d12b1131-utilities" (OuterVolumeSpecName: "utilities") pod "382fe87a-e016-4450-9309-6f52d12b1131" (UID: "382fe87a-e016-4450-9309-6f52d12b1131"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:44:58 crc kubenswrapper[4772]: I1128 11:44:58.977707 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/382fe87a-e016-4450-9309-6f52d12b1131-kube-api-access-vqp4s" (OuterVolumeSpecName: "kube-api-access-vqp4s") pod "382fe87a-e016-4450-9309-6f52d12b1131" (UID: "382fe87a-e016-4450-9309-6f52d12b1131"). InnerVolumeSpecName "kube-api-access-vqp4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.067003 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqp4s\" (UniqueName: \"kubernetes.io/projected/382fe87a-e016-4450-9309-6f52d12b1131-kube-api-access-vqp4s\") on node \"crc\" DevicePath \"\""
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.067042 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/382fe87a-e016-4450-9309-6f52d12b1131-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.083295 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/382fe87a-e016-4450-9309-6f52d12b1131-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "382fe87a-e016-4450-9309-6f52d12b1131" (UID: "382fe87a-e016-4450-9309-6f52d12b1131"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.169625 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/382fe87a-e016-4450-9309-6f52d12b1131-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.398000 4772 generic.go:334] "Generic (PLEG): container finished" podID="382fe87a-e016-4450-9309-6f52d12b1131" containerID="138c91bcc157f0f81ae1d56f51532a80bb88ae9f2d75359942179997466a1f38" exitCode=0
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.398139 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ntq52"
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.398166 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntq52" event={"ID":"382fe87a-e016-4450-9309-6f52d12b1131","Type":"ContainerDied","Data":"138c91bcc157f0f81ae1d56f51532a80bb88ae9f2d75359942179997466a1f38"}
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.398749 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntq52" event={"ID":"382fe87a-e016-4450-9309-6f52d12b1131","Type":"ContainerDied","Data":"98f28d4d6430069d47bd41a2459cc5a597b40218a52b1af73be688b3e0a56e67"}
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.398793 4772 scope.go:117] "RemoveContainer" containerID="138c91bcc157f0f81ae1d56f51532a80bb88ae9f2d75359942179997466a1f38"
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.448999 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ntq52"]
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.455844 4772 scope.go:117] "RemoveContainer" containerID="1e447ec5910bd33d56d902daa60e726e2ab59f1ef99269cc1fffa75e03f3d4c6"
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.457287 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ntq52"]
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.497120 4772 scope.go:117] "RemoveContainer" containerID="72e182b534c58326d1d958b99f81d87ddf249824cbdd4356cb99b07240dbed5b"
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.567132 4772 scope.go:117] "RemoveContainer" containerID="138c91bcc157f0f81ae1d56f51532a80bb88ae9f2d75359942179997466a1f38"
Nov 28 11:44:59 crc kubenswrapper[4772]: E1128 11:44:59.567737 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"138c91bcc157f0f81ae1d56f51532a80bb88ae9f2d75359942179997466a1f38\": container with ID starting with 138c91bcc157f0f81ae1d56f51532a80bb88ae9f2d75359942179997466a1f38 not found: ID does not exist" containerID="138c91bcc157f0f81ae1d56f51532a80bb88ae9f2d75359942179997466a1f38"
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.567804 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"138c91bcc157f0f81ae1d56f51532a80bb88ae9f2d75359942179997466a1f38"} err="failed to get container status \"138c91bcc157f0f81ae1d56f51532a80bb88ae9f2d75359942179997466a1f38\": rpc error: code = NotFound desc = could not find container \"138c91bcc157f0f81ae1d56f51532a80bb88ae9f2d75359942179997466a1f38\": container with ID starting with 138c91bcc157f0f81ae1d56f51532a80bb88ae9f2d75359942179997466a1f38 not found: ID does not exist"
Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.567828 4772 scope.go:117] "RemoveContainer" containerID="1e447ec5910bd33d56d902daa60e726e2ab59f1ef99269cc1fffa75e03f3d4c6"
kubenswrapper[4772]: I1128 11:44:59.567828 4772 scope.go:117] "RemoveContainer" containerID="1e447ec5910bd33d56d902daa60e726e2ab59f1ef99269cc1fffa75e03f3d4c6" Nov 28 11:44:59 crc kubenswrapper[4772]: E1128 11:44:59.568554 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e447ec5910bd33d56d902daa60e726e2ab59f1ef99269cc1fffa75e03f3d4c6\": container with ID starting with 1e447ec5910bd33d56d902daa60e726e2ab59f1ef99269cc1fffa75e03f3d4c6 not found: ID does not exist" containerID="1e447ec5910bd33d56d902daa60e726e2ab59f1ef99269cc1fffa75e03f3d4c6" Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.568605 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e447ec5910bd33d56d902daa60e726e2ab59f1ef99269cc1fffa75e03f3d4c6"} err="failed to get container status \"1e447ec5910bd33d56d902daa60e726e2ab59f1ef99269cc1fffa75e03f3d4c6\": rpc error: code = NotFound desc = could not find container \"1e447ec5910bd33d56d902daa60e726e2ab59f1ef99269cc1fffa75e03f3d4c6\": container with ID starting with 1e447ec5910bd33d56d902daa60e726e2ab59f1ef99269cc1fffa75e03f3d4c6 not found: ID does not exist" Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.568641 4772 scope.go:117] "RemoveContainer" containerID="72e182b534c58326d1d958b99f81d87ddf249824cbdd4356cb99b07240dbed5b" Nov 28 11:44:59 crc kubenswrapper[4772]: E1128 11:44:59.569099 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72e182b534c58326d1d958b99f81d87ddf249824cbdd4356cb99b07240dbed5b\": container with ID starting with 72e182b534c58326d1d958b99f81d87ddf249824cbdd4356cb99b07240dbed5b not found: ID does not exist" containerID="72e182b534c58326d1d958b99f81d87ddf249824cbdd4356cb99b07240dbed5b" Nov 28 11:44:59 crc kubenswrapper[4772]: I1128 11:44:59.569257 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72e182b534c58326d1d958b99f81d87ddf249824cbdd4356cb99b07240dbed5b"} err="failed to get container status \"72e182b534c58326d1d958b99f81d87ddf249824cbdd4356cb99b07240dbed5b\": rpc error: code = NotFound desc = could not find container \"72e182b534c58326d1d958b99f81d87ddf249824cbdd4356cb99b07240dbed5b\": container with ID starting with 72e182b534c58326d1d958b99f81d87ddf249824cbdd4356cb99b07240dbed5b not found: ID does not exist" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.033107 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="382fe87a-e016-4450-9309-6f52d12b1131" path="/var/lib/kubelet/pods/382fe87a-e016-4450-9309-6f52d12b1131/volumes" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.160338 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw"] Nov 28 11:45:00 crc kubenswrapper[4772]: E1128 11:45:00.165899 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382fe87a-e016-4450-9309-6f52d12b1131" containerName="extract-utilities" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.166173 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="382fe87a-e016-4450-9309-6f52d12b1131" containerName="extract-utilities" Nov 28 11:45:00 crc kubenswrapper[4772]: E1128 11:45:00.166286 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce6c210-9eec-46d9-a171-7e9e84f55b55" containerName="extract-utilities" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 
Nov 28 11:45:00 crc kubenswrapper[4772]: E1128 11:45:00.166521 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382fe87a-e016-4450-9309-6f52d12b1131" containerName="extract-content"
Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.166623 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="382fe87a-e016-4450-9309-6f52d12b1131" containerName="extract-content"
Nov 28 11:45:00 crc kubenswrapper[4772]: E1128 11:45:00.166723 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce6c210-9eec-46d9-a171-7e9e84f55b55" containerName="registry-server"
Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.166837 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce6c210-9eec-46d9-a171-7e9e84f55b55" containerName="registry-server"
Nov 28 11:45:00 crc kubenswrapper[4772]: E1128 11:45:00.166928 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382fe87a-e016-4450-9309-6f52d12b1131" containerName="registry-server"
Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.167011 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="382fe87a-e016-4450-9309-6f52d12b1131" containerName="registry-server"
Nov 28 11:45:00 crc kubenswrapper[4772]: E1128 11:45:00.167102 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce6c210-9eec-46d9-a171-7e9e84f55b55" containerName="extract-content"
Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.167189 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce6c210-9eec-46d9-a171-7e9e84f55b55" containerName="extract-content"
Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.167591 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="382fe87a-e016-4450-9309-6f52d12b1131" containerName="registry-server"
Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.167711 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ce6c210-9eec-46d9-a171-7e9e84f55b55" containerName="registry-server"
Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.168928 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.172016 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.172538 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.184262 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw"] Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.292508 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/812d0ce6-f4f6-4201-8091-2c9104854703-config-volume\") pod \"collect-profiles-29405505-2vqxw\" (UID: \"812d0ce6-f4f6-4201-8091-2c9104854703\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.292583 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/812d0ce6-f4f6-4201-8091-2c9104854703-secret-volume\") pod \"collect-profiles-29405505-2vqxw\" (UID: \"812d0ce6-f4f6-4201-8091-2c9104854703\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.292616 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69gxd\" (UniqueName: \"kubernetes.io/projected/812d0ce6-f4f6-4201-8091-2c9104854703-kube-api-access-69gxd\") pod \"collect-profiles-29405505-2vqxw\" (UID: \"812d0ce6-f4f6-4201-8091-2c9104854703\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.395465 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/812d0ce6-f4f6-4201-8091-2c9104854703-config-volume\") pod \"collect-profiles-29405505-2vqxw\" (UID: \"812d0ce6-f4f6-4201-8091-2c9104854703\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.395565 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/812d0ce6-f4f6-4201-8091-2c9104854703-secret-volume\") pod \"collect-profiles-29405505-2vqxw\" (UID: \"812d0ce6-f4f6-4201-8091-2c9104854703\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.395628 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69gxd\" (UniqueName: \"kubernetes.io/projected/812d0ce6-f4f6-4201-8091-2c9104854703-kube-api-access-69gxd\") pod \"collect-profiles-29405505-2vqxw\" (UID: \"812d0ce6-f4f6-4201-8091-2c9104854703\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.398191 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/812d0ce6-f4f6-4201-8091-2c9104854703-config-volume\") pod 
\"collect-profiles-29405505-2vqxw\" (UID: \"812d0ce6-f4f6-4201-8091-2c9104854703\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.411155 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/812d0ce6-f4f6-4201-8091-2c9104854703-secret-volume\") pod \"collect-profiles-29405505-2vqxw\" (UID: \"812d0ce6-f4f6-4201-8091-2c9104854703\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.422151 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69gxd\" (UniqueName: \"kubernetes.io/projected/812d0ce6-f4f6-4201-8091-2c9104854703-kube-api-access-69gxd\") pod \"collect-profiles-29405505-2vqxw\" (UID: \"812d0ce6-f4f6-4201-8091-2c9104854703\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" Nov 28 11:45:00 crc kubenswrapper[4772]: I1128 11:45:00.495539 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" Nov 28 11:45:01 crc kubenswrapper[4772]: I1128 11:45:01.047686 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw"] Nov 28 11:45:01 crc kubenswrapper[4772]: I1128 11:45:01.427845 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" event={"ID":"812d0ce6-f4f6-4201-8091-2c9104854703","Type":"ContainerStarted","Data":"b48c5933063a7931fcbe459f9be792a987f84cddffb236676848b567944bc975"} Nov 28 11:45:01 crc kubenswrapper[4772]: I1128 11:45:01.428206 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" event={"ID":"812d0ce6-f4f6-4201-8091-2c9104854703","Type":"ContainerStarted","Data":"018c93b3ba16a4c2793706ae97fc73884aa9e65ac5f05281ff1b873879eac966"} Nov 28 11:45:01 crc kubenswrapper[4772]: I1128 11:45:01.458555 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" podStartSLOduration=1.458531029 podStartE2EDuration="1.458531029s" podCreationTimestamp="2025-11-28 11:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:45:01.456087333 +0000 UTC m=+2299.779330560" watchObservedRunningTime="2025-11-28 11:45:01.458531029 +0000 UTC m=+2299.781774246" Nov 28 11:45:02 crc kubenswrapper[4772]: I1128 11:45:02.441178 4772 generic.go:334] "Generic (PLEG): container finished" podID="812d0ce6-f4f6-4201-8091-2c9104854703" containerID="b48c5933063a7931fcbe459f9be792a987f84cddffb236676848b567944bc975" exitCode=0 Nov 28 11:45:02 crc kubenswrapper[4772]: I1128 11:45:02.441244 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" event={"ID":"812d0ce6-f4f6-4201-8091-2c9104854703","Type":"ContainerDied","Data":"b48c5933063a7931fcbe459f9be792a987f84cddffb236676848b567944bc975"} Nov 28 11:45:03 crc kubenswrapper[4772]: I1128 11:45:03.789009 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw"
Nov 28 11:45:03 crc kubenswrapper[4772]: I1128 11:45:03.898736 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69gxd\" (UniqueName: \"kubernetes.io/projected/812d0ce6-f4f6-4201-8091-2c9104854703-kube-api-access-69gxd\") pod \"812d0ce6-f4f6-4201-8091-2c9104854703\" (UID: \"812d0ce6-f4f6-4201-8091-2c9104854703\") "
Nov 28 11:45:03 crc kubenswrapper[4772]: I1128 11:45:03.898820 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/812d0ce6-f4f6-4201-8091-2c9104854703-config-volume\") pod \"812d0ce6-f4f6-4201-8091-2c9104854703\" (UID: \"812d0ce6-f4f6-4201-8091-2c9104854703\") "
Nov 28 11:45:03 crc kubenswrapper[4772]: I1128 11:45:03.899657 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812d0ce6-f4f6-4201-8091-2c9104854703-config-volume" (OuterVolumeSpecName: "config-volume") pod "812d0ce6-f4f6-4201-8091-2c9104854703" (UID: "812d0ce6-f4f6-4201-8091-2c9104854703"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:45:03 crc kubenswrapper[4772]: I1128 11:45:03.900503 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/812d0ce6-f4f6-4201-8091-2c9104854703-secret-volume\") pod \"812d0ce6-f4f6-4201-8091-2c9104854703\" (UID: \"812d0ce6-f4f6-4201-8091-2c9104854703\") "
Nov 28 11:45:03 crc kubenswrapper[4772]: I1128 11:45:03.901163 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/812d0ce6-f4f6-4201-8091-2c9104854703-config-volume\") on node \"crc\" DevicePath \"\""
Nov 28 11:45:03 crc kubenswrapper[4772]: I1128 11:45:03.907280 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/812d0ce6-f4f6-4201-8091-2c9104854703-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "812d0ce6-f4f6-4201-8091-2c9104854703" (UID: "812d0ce6-f4f6-4201-8091-2c9104854703"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:45:03 crc kubenswrapper[4772]: I1128 11:45:03.907403 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812d0ce6-f4f6-4201-8091-2c9104854703-kube-api-access-69gxd" (OuterVolumeSpecName: "kube-api-access-69gxd") pod "812d0ce6-f4f6-4201-8091-2c9104854703" (UID: "812d0ce6-f4f6-4201-8091-2c9104854703"). InnerVolumeSpecName "kube-api-access-69gxd". PluginName "kubernetes.io/projected", VolumeGidValue ""
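Editor's note: the UnmountVolume → TearDown → "Volume detached" progression above is the kubelet volume manager reconciling actual state against desired state after the pod's deletion. Roughly: volumes still mounted but no longer desired get unmounted and marked detached, and in the other direction missing desired volumes get mounted. A toy sketch under those assumptions follows; the real reconciler in kubelet's volumemanager tracks far more state than two sets.

```go
// reconcile.go — a deliberately tiny sketch (hypothetical types, not the real
// volumemanager) of the loop behind the entries above: diff the desired state
// of mounted volumes against the actual state and converge.
package main

import "fmt"

func reconcile(desired, actual map[string]bool) {
	// Mounted but no longer desired: unmount, then mark detached.
	for vol := range actual {
		if !desired[vol] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
			delete(actual, vol) // TearDown would run here
			fmt.Printf("Volume detached for volume %q\n", vol)
		}
	}
	// Desired but not yet mounted: mount.
	for vol := range desired {
		if !actual[vol] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", vol)
			actual[vol] = true
		}
	}
}

func main() {
	// Pod deleted: desired state is empty, three volumes are still mounted.
	actual := map[string]bool{
		"config-volume":         true,
		"secret-volume":         true,
		"kube-api-access-69gxd": true,
	}
	reconcile(map[string]bool{}, actual)
}
```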
Nov 28 11:45:04 crc kubenswrapper[4772]: I1128 11:45:04.003686 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69gxd\" (UniqueName: \"kubernetes.io/projected/812d0ce6-f4f6-4201-8091-2c9104854703-kube-api-access-69gxd\") on node \"crc\" DevicePath \"\""
Nov 28 11:45:04 crc kubenswrapper[4772]: I1128 11:45:04.003724 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/812d0ce6-f4f6-4201-8091-2c9104854703-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 28 11:45:04 crc kubenswrapper[4772]: I1128 11:45:04.472592 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw" event={"ID":"812d0ce6-f4f6-4201-8091-2c9104854703","Type":"ContainerDied","Data":"018c93b3ba16a4c2793706ae97fc73884aa9e65ac5f05281ff1b873879eac966"}
Nov 28 11:45:04 crc kubenswrapper[4772]: I1128 11:45:04.472980 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="018c93b3ba16a4c2793706ae97fc73884aa9e65ac5f05281ff1b873879eac966"
Nov 28 11:45:04 crc kubenswrapper[4772]: I1128 11:45:04.472712 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2vqxw"
Nov 28 11:45:04 crc kubenswrapper[4772]: I1128 11:45:04.554055 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf"]
Nov 28 11:45:04 crc kubenswrapper[4772]: I1128 11:45:04.566461 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405460-hh2nf"]
Nov 28 11:45:06 crc kubenswrapper[4772]: I1128 11:45:06.014133 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24396734-e237-4fb3-9cae-8c08db3a9122" path="/var/lib/kubelet/pods/24396734-e237-4fb3-9cae-8c08db3a9122/volumes"
Nov 28 11:45:46 crc kubenswrapper[4772]: I1128 11:45:46.942735 4772 scope.go:117] "RemoveContainer" containerID="3f5eb90ffa04d24c387d4387b20468ba508a0465e449bba41273179694d6dd3d"
Nov 28 11:45:53 crc kubenswrapper[4772]: I1128 11:45:53.896293 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 11:45:53 crc kubenswrapper[4772]: I1128 11:45:53.896999 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 11:46:23 crc kubenswrapper[4772]: I1128 11:46:23.896268 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 11:46:23 crc kubenswrapper[4772]: I1128 11:46:23.897095 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
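Editor's note: the two probe failures above are plain TCP connection refusals: nothing is listening on 127.0.0.1:8798, so the liveness GET never reaches a handler. They recur every 30 seconds, and the kill decision at 11:46:53 further down follows the third consecutive failure, consistent with a failureThreshold of 3 (an inference from the timestamps, not stated in the log). A minimal sketch of such an HTTP probe; the URL is taken from the log, while the 1-second timeout is an assumption (kubelet uses the probe's configured timeoutSeconds) and any 2xx/3xx status counts as success, as in kubelet's HTTP prober.

```go
// probe.go — a minimal HTTP liveness check in the spirit of the failing probe
// above; not kubelet's prober, just the same shape of request and verdict.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second} // assumed timeout
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" when nothing listens
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err) // mirrors the prober.go:107 entries
	} else {
		fmt.Println("Probe succeeded")
	}
}
```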
Nov 28 11:46:31 crc kubenswrapper[4772]: I1128 11:46:31.545333 4772 generic.go:334] "Generic (PLEG): container finished" podID="0b865b7c-a1c7-4f0b-b289-d980f76a946d" containerID="8e11f93450ffcd9f28a9042f2f39a57638923c88d4ee9527e4e68c3021c485e2" exitCode=0
Nov 28 11:46:31 crc kubenswrapper[4772]: I1128 11:46:31.545471 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" event={"ID":"0b865b7c-a1c7-4f0b-b289-d980f76a946d","Type":"ContainerDied","Data":"8e11f93450ffcd9f28a9042f2f39a57638923c88d4ee9527e4e68c3021c485e2"}
Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.009405 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp"
Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.075815 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-inventory\") pod \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") "
Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.075924 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch7sz\" (UniqueName: \"kubernetes.io/projected/0b865b7c-a1c7-4f0b-b289-d980f76a946d-kube-api-access-ch7sz\") pod \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") "
Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.076010 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-ssh-key\") pod \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") "
Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.076049 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-libvirt-combined-ca-bundle\") pod \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") "
Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.076119 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-libvirt-secret-0\") pod \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\" (UID: \"0b865b7c-a1c7-4f0b-b289-d980f76a946d\") "
Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.083090 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b865b7c-a1c7-4f0b-b289-d980f76a946d-kube-api-access-ch7sz" (OuterVolumeSpecName: "kube-api-access-ch7sz") pod "0b865b7c-a1c7-4f0b-b289-d980f76a946d" (UID: "0b865b7c-a1c7-4f0b-b289-d980f76a946d"). InnerVolumeSpecName "kube-api-access-ch7sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.085550 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "0b865b7c-a1c7-4f0b-b289-d980f76a946d" (UID: "0b865b7c-a1c7-4f0b-b289-d980f76a946d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.108885 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "0b865b7c-a1c7-4f0b-b289-d980f76a946d" (UID: "0b865b7c-a1c7-4f0b-b289-d980f76a946d"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.115854 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0b865b7c-a1c7-4f0b-b289-d980f76a946d" (UID: "0b865b7c-a1c7-4f0b-b289-d980f76a946d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.130706 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-inventory" (OuterVolumeSpecName: "inventory") pod "0b865b7c-a1c7-4f0b-b289-d980f76a946d" (UID: "0b865b7c-a1c7-4f0b-b289-d980f76a946d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.180113 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.180169 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch7sz\" (UniqueName: \"kubernetes.io/projected/0b865b7c-a1c7-4f0b-b289-d980f76a946d-kube-api-access-ch7sz\") on node \"crc\" DevicePath \"\"" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.180183 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.180196 4772 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.180213 4772 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0b865b7c-a1c7-4f0b-b289-d980f76a946d-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.572866 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" event={"ID":"0b865b7c-a1c7-4f0b-b289-d980f76a946d","Type":"ContainerDied","Data":"79a8368939a562c1458df4fcf793a29924c915d98946e156309773f33474b7ae"} Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.572913 4772 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79a8368939a562c1458df4fcf793a29924c915d98946e156309773f33474b7ae" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.573295 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.690254 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2"] Nov 28 11:46:33 crc kubenswrapper[4772]: E1128 11:46:33.690673 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812d0ce6-f4f6-4201-8091-2c9104854703" containerName="collect-profiles" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.690692 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="812d0ce6-f4f6-4201-8091-2c9104854703" containerName="collect-profiles" Nov 28 11:46:33 crc kubenswrapper[4772]: E1128 11:46:33.690721 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b865b7c-a1c7-4f0b-b289-d980f76a946d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.690732 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b865b7c-a1c7-4f0b-b289-d980f76a946d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.690923 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b865b7c-a1c7-4f0b-b289-d980f76a946d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.690941 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="812d0ce6-f4f6-4201-8091-2c9104854703" containerName="collect-profiles" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.691659 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.695110 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.695937 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.696248 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.696256 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.696269 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.698844 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.698981 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.710245 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2"] Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.790686 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.790777 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.790828 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.791122 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.791276 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/3bcf40ed-6681-4685-8277-c31e223c9686-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.791433 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckbrr\" (UniqueName: \"kubernetes.io/projected/3bcf40ed-6681-4685-8277-c31e223c9686-kube-api-access-ckbrr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.791744 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.791892 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.791986 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.894153 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.894237 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.894273 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.894337 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" 
(UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.894389 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.894427 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.894478 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.894513 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3bcf40ed-6681-4685-8277-c31e223c9686-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.894543 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckbrr\" (UniqueName: \"kubernetes.io/projected/3bcf40ed-6681-4685-8277-c31e223c9686-kube-api-access-ckbrr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.896406 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3bcf40ed-6681-4685-8277-c31e223c9686-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.898660 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.898673 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-cell1-compute-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.900021 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.900530 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.901895 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.902536 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.903470 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:33 crc kubenswrapper[4772]: I1128 11:46:33.922476 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckbrr\" (UniqueName: \"kubernetes.io/projected/3bcf40ed-6681-4685-8277-c31e223c9686-kube-api-access-ckbrr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pl4v2\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:46:34 crc kubenswrapper[4772]: I1128 11:46:34.013675 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2"
Nov 28 11:46:34 crc kubenswrapper[4772]: W1128 11:46:34.623825 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bcf40ed_6681_4685_8277_c31e223c9686.slice/crio-691fc7793e7c8e6c9926f3e87c0d366885d960dee88b913a6e5d8e4d15e88b49 WatchSource:0}: Error finding container 691fc7793e7c8e6c9926f3e87c0d366885d960dee88b913a6e5d8e4d15e88b49: Status 404 returned error can't find the container with id 691fc7793e7c8e6c9926f3e87c0d366885d960dee88b913a6e5d8e4d15e88b49
Nov 28 11:46:34 crc kubenswrapper[4772]: I1128 11:46:34.626247 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2"]
Nov 28 11:46:35 crc kubenswrapper[4772]: I1128 11:46:35.596926 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" event={"ID":"3bcf40ed-6681-4685-8277-c31e223c9686","Type":"ContainerStarted","Data":"32d3cb64869e028aa2de368793caa045ddf7060f2b5658cc4a3db6a1c2e19349"}
Nov 28 11:46:35 crc kubenswrapper[4772]: I1128 11:46:35.597876 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" event={"ID":"3bcf40ed-6681-4685-8277-c31e223c9686","Type":"ContainerStarted","Data":"691fc7793e7c8e6c9926f3e87c0d366885d960dee88b913a6e5d8e4d15e88b49"}
Nov 28 11:46:35 crc kubenswrapper[4772]: I1128 11:46:35.615295 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" podStartSLOduration=2.092818231 podStartE2EDuration="2.615268947s" podCreationTimestamp="2025-11-28 11:46:33 +0000 UTC" firstStartedPulling="2025-11-28 11:46:34.628535432 +0000 UTC m=+2392.951778669" lastFinishedPulling="2025-11-28 11:46:35.150986138 +0000 UTC m=+2393.474229385" observedRunningTime="2025-11-28 11:46:35.614221739 +0000 UTC m=+2393.937464996" watchObservedRunningTime="2025-11-28 11:46:35.615268947 +0000 UTC m=+2393.938512184"
Nov 28 11:46:53 crc kubenswrapper[4772]: I1128 11:46:53.896644 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 11:46:53 crc kubenswrapper[4772]: I1128 11:46:53.898389 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 11:46:53 crc kubenswrapper[4772]: I1128 11:46:53.898544 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk"
Nov 28 11:46:53 crc kubenswrapper[4772]: I1128 11:46:53.899488 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 11:46:53 crc kubenswrapper[4772]: I1128 11:46:53.899678 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" gracePeriod=600
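Editor's note: the "Observed pod startup duration" entry above carries enough data to recompute both figures. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. A small Go check with the values copied from the log:

```go
// latency.go — reproducing the arithmetic behind the pod_startup_latency_tracker
// entry above, using the timestamps printed in the log itself.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches Go's default time.Time formatting used in the log.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-28 11:46:33 +0000 UTC")
	running := mustParse("2025-11-28 11:46:35.615268947 +0000 UTC")
	pullStart := mustParse("2025-11-28 11:46:34.628535432 +0000 UTC")
	pullEnd := mustParse("2025-11-28 11:46:35.150986138 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration = 2.615268947s
	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration ≈ 2.092818241s
	fmt.Println("E2E:", e2e, "SLO:", slo)
}
```

The wall-clock recomputation gives 2.092818241s against the tracker's 2.092818231; the tracker's figure matches the monotonic m=+… readings exactly, so it evidently subtracts those rather than the wall-clock values, and only the last digits differ.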
Nov 28 11:46:54 crc kubenswrapper[4772]: E1128 11:46:54.035999 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:46:54 crc kubenswrapper[4772]: I1128 11:46:54.858025 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" exitCode=0
Nov 28 11:46:54 crc kubenswrapper[4772]: I1128 11:46:54.858077 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"}
Nov 28 11:46:54 crc kubenswrapper[4772]: I1128 11:46:54.858134 4772 scope.go:117] "RemoveContainer" containerID="99211b40d876d871be4ab3485a6ae32012cb5b156fc3de5699bcb68b9f4c3d94"
Nov 28 11:46:54 crc kubenswrapper[4772]: I1128 11:46:54.858942 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"
Nov 28 11:46:54 crc kubenswrapper[4772]: E1128 11:46:54.859268 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:47:08 crc kubenswrapper[4772]: I1128 11:47:08.995139 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"
Nov 28 11:47:08 crc kubenswrapper[4772]: E1128 11:47:08.996176 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:47:23 crc kubenswrapper[4772]: I1128 11:47:23.995282 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"
Nov 28 11:47:23 crc kubenswrapper[4772]: E1128 11:47:23.996162 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:47:38 crc kubenswrapper[4772]: I1128 11:47:38.995136 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"
Nov 28 11:47:38 crc kubenswrapper[4772]: E1128 11:47:38.996144 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:47:52 crc kubenswrapper[4772]: I1128 11:47:52.008343 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"
Nov 28 11:47:52 crc kubenswrapper[4772]: E1128 11:47:52.010416 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:48:06 crc kubenswrapper[4772]: I1128 11:48:06.994908 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"
Nov 28 11:48:06 crc kubenswrapper[4772]: E1128 11:48:06.995764 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:48:19 crc kubenswrapper[4772]: I1128 11:48:19.994883 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"
Nov 28 11:48:19 crc kubenswrapper[4772]: E1128 11:48:19.996311 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:48:32 crc kubenswrapper[4772]: I1128 11:48:32.000032 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"
Nov 28 11:48:32 crc kubenswrapper[4772]: E1128 11:48:32.000842 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:48:44 crc kubenswrapper[4772]: I1128 11:48:44.995664 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"
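Editor's note: each "Error syncing pod, skipping" above is the kubelet declining to restart the container while its CrashLoopBackOff window is open; the roughly 15-second cadence of the messages is just the sync-loop retry, not the backoff itself. The "back-off 5m0s" in the message is the cap of an exponential schedule. A sketch follows; the 10s initial delay and the doubling factor are upstream kubelet defaults to the best of my knowledge and should be treated as assumptions, while the 5m cap is visible in the log itself.

```go
// backoff.go — a sketch of the exponential container-restart backoff implied
// by "back-off 5m0s": the delay doubles on each crash up to a cap.
package main

import (
	"fmt"
	"time"
)

// backoffSchedule returns the wait before each of the first n restarts.
func backoffSchedule(initial, max time.Duration, n int) []time.Duration {
	out := make([]time.Duration, 0, n)
	d := initial
	for i := 0; i < n; i++ {
		out = append(out, d)
		d *= 2
		if d > max {
			d = max // the "back-off 5m0s" in the log is this cap
		}
	}
	return out
}

func main() {
	// Prints: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s, 5m0s
	for i, d := range backoffSchedule(10*time.Second, 5*time.Minute, 8) {
		fmt.Printf("restart %d: wait %v\n", i+1, d)
	}
}
```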
Nov 28 11:48:44 crc kubenswrapper[4772]: E1128 11:48:44.996838 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:48:55 crc kubenswrapper[4772]: I1128 11:48:55.995240 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"
Nov 28 11:48:55 crc kubenswrapper[4772]: E1128 11:48:55.996468 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:49:10 crc kubenswrapper[4772]: I1128 11:49:10.994478 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"
Nov 28 11:49:10 crc kubenswrapper[4772]: E1128 11:49:10.995157 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:49:22 crc kubenswrapper[4772]: I1128 11:49:22.994768 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522"
Nov 28 11:49:22 crc kubenswrapper[4772]: E1128 11:49:22.995820 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe"
Nov 28 11:49:23 crc kubenswrapper[4772]: I1128 11:49:23.918497 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lzlf4"]
Nov 28 11:49:23 crc kubenswrapper[4772]: I1128 11:49:23.921173 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:23 crc kubenswrapper[4772]: I1128 11:49:23.964869 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lzlf4"] Nov 28 11:49:24 crc kubenswrapper[4772]: I1128 11:49:24.021625 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-728tk\" (UniqueName: \"kubernetes.io/projected/9246e095-020f-4be2-850c-bb63b183b37a-kube-api-access-728tk\") pod \"community-operators-lzlf4\" (UID: \"9246e095-020f-4be2-850c-bb63b183b37a\") " pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:24 crc kubenswrapper[4772]: I1128 11:49:24.021866 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9246e095-020f-4be2-850c-bb63b183b37a-utilities\") pod \"community-operators-lzlf4\" (UID: \"9246e095-020f-4be2-850c-bb63b183b37a\") " pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:24 crc kubenswrapper[4772]: I1128 11:49:24.022166 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9246e095-020f-4be2-850c-bb63b183b37a-catalog-content\") pod \"community-operators-lzlf4\" (UID: \"9246e095-020f-4be2-850c-bb63b183b37a\") " pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:24 crc kubenswrapper[4772]: I1128 11:49:24.123179 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9246e095-020f-4be2-850c-bb63b183b37a-catalog-content\") pod \"community-operators-lzlf4\" (UID: \"9246e095-020f-4be2-850c-bb63b183b37a\") " pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:24 crc kubenswrapper[4772]: I1128 11:49:24.123237 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-728tk\" (UniqueName: \"kubernetes.io/projected/9246e095-020f-4be2-850c-bb63b183b37a-kube-api-access-728tk\") pod \"community-operators-lzlf4\" (UID: \"9246e095-020f-4be2-850c-bb63b183b37a\") " pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:24 crc kubenswrapper[4772]: I1128 11:49:24.123310 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9246e095-020f-4be2-850c-bb63b183b37a-utilities\") pod \"community-operators-lzlf4\" (UID: \"9246e095-020f-4be2-850c-bb63b183b37a\") " pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:24 crc kubenswrapper[4772]: I1128 11:49:24.123848 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9246e095-020f-4be2-850c-bb63b183b37a-utilities\") pod \"community-operators-lzlf4\" (UID: \"9246e095-020f-4be2-850c-bb63b183b37a\") " pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:24 crc kubenswrapper[4772]: I1128 11:49:24.124090 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9246e095-020f-4be2-850c-bb63b183b37a-catalog-content\") pod \"community-operators-lzlf4\" (UID: \"9246e095-020f-4be2-850c-bb63b183b37a\") " pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:24 crc kubenswrapper[4772]: I1128 11:49:24.156170 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-728tk\" (UniqueName: \"kubernetes.io/projected/9246e095-020f-4be2-850c-bb63b183b37a-kube-api-access-728tk\") pod \"community-operators-lzlf4\" (UID: \"9246e095-020f-4be2-850c-bb63b183b37a\") " pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:24 crc kubenswrapper[4772]: I1128 11:49:24.264751 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:24 crc kubenswrapper[4772]: I1128 11:49:24.600922 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lzlf4"] Nov 28 11:49:25 crc kubenswrapper[4772]: I1128 11:49:25.224208 4772 generic.go:334] "Generic (PLEG): container finished" podID="9246e095-020f-4be2-850c-bb63b183b37a" containerID="cd0ee9d05f09878e06a7ee22f03da58e27bbd9aebdc595a69de258ae4284a6cd" exitCode=0 Nov 28 11:49:25 crc kubenswrapper[4772]: I1128 11:49:25.224292 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzlf4" event={"ID":"9246e095-020f-4be2-850c-bb63b183b37a","Type":"ContainerDied","Data":"cd0ee9d05f09878e06a7ee22f03da58e27bbd9aebdc595a69de258ae4284a6cd"} Nov 28 11:49:25 crc kubenswrapper[4772]: I1128 11:49:25.224724 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzlf4" event={"ID":"9246e095-020f-4be2-850c-bb63b183b37a","Type":"ContainerStarted","Data":"7e1375a86cfad37d292db176c6aef941da62550e344266db226a3479c1d4554d"} Nov 28 11:49:27 crc kubenswrapper[4772]: I1128 11:49:27.254325 4772 generic.go:334] "Generic (PLEG): container finished" podID="9246e095-020f-4be2-850c-bb63b183b37a" containerID="2b9a0ff0dbe338c07c286a56fcca405f97ce3322cd6c31085137f26078a01b0d" exitCode=0 Nov 28 11:49:27 crc kubenswrapper[4772]: I1128 11:49:27.254429 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzlf4" event={"ID":"9246e095-020f-4be2-850c-bb63b183b37a","Type":"ContainerDied","Data":"2b9a0ff0dbe338c07c286a56fcca405f97ce3322cd6c31085137f26078a01b0d"} Nov 28 11:49:29 crc kubenswrapper[4772]: I1128 11:49:29.286177 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzlf4" event={"ID":"9246e095-020f-4be2-850c-bb63b183b37a","Type":"ContainerStarted","Data":"9cef93bf4f86b470179d9ad8ff1e12482c484fdbd3f2b8e233ed6e827cb9c0a7"} Nov 28 11:49:29 crc kubenswrapper[4772]: I1128 11:49:29.312599 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lzlf4" podStartSLOduration=3.486725168 podStartE2EDuration="6.312581621s" podCreationTimestamp="2025-11-28 11:49:23 +0000 UTC" firstStartedPulling="2025-11-28 11:49:25.226183125 +0000 UTC m=+2563.549426352" lastFinishedPulling="2025-11-28 11:49:28.052039548 +0000 UTC m=+2566.375282805" observedRunningTime="2025-11-28 11:49:29.309087858 +0000 UTC m=+2567.632331105" watchObservedRunningTime="2025-11-28 11:49:29.312581621 +0000 UTC m=+2567.635824848" Nov 28 11:49:34 crc kubenswrapper[4772]: I1128 11:49:34.265736 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:34 crc kubenswrapper[4772]: I1128 11:49:34.266561 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:34 crc kubenswrapper[4772]: I1128 11:49:34.325427 4772 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:34 crc kubenswrapper[4772]: I1128 11:49:34.410848 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:34 crc kubenswrapper[4772]: I1128 11:49:34.567875 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lzlf4"] Nov 28 11:49:35 crc kubenswrapper[4772]: I1128 11:49:35.994876 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" Nov 28 11:49:35 crc kubenswrapper[4772]: E1128 11:49:35.995210 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:49:36 crc kubenswrapper[4772]: I1128 11:49:36.364116 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lzlf4" podUID="9246e095-020f-4be2-850c-bb63b183b37a" containerName="registry-server" containerID="cri-o://9cef93bf4f86b470179d9ad8ff1e12482c484fdbd3f2b8e233ed6e827cb9c0a7" gracePeriod=2 Nov 28 11:49:36 crc kubenswrapper[4772]: I1128 11:49:36.876497 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:36 crc kubenswrapper[4772]: I1128 11:49:36.968407 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9246e095-020f-4be2-850c-bb63b183b37a-utilities\") pod \"9246e095-020f-4be2-850c-bb63b183b37a\" (UID: \"9246e095-020f-4be2-850c-bb63b183b37a\") " Nov 28 11:49:36 crc kubenswrapper[4772]: I1128 11:49:36.968466 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-728tk\" (UniqueName: \"kubernetes.io/projected/9246e095-020f-4be2-850c-bb63b183b37a-kube-api-access-728tk\") pod \"9246e095-020f-4be2-850c-bb63b183b37a\" (UID: \"9246e095-020f-4be2-850c-bb63b183b37a\") " Nov 28 11:49:36 crc kubenswrapper[4772]: I1128 11:49:36.968544 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9246e095-020f-4be2-850c-bb63b183b37a-catalog-content\") pod \"9246e095-020f-4be2-850c-bb63b183b37a\" (UID: \"9246e095-020f-4be2-850c-bb63b183b37a\") " Nov 28 11:49:36 crc kubenswrapper[4772]: I1128 11:49:36.970159 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9246e095-020f-4be2-850c-bb63b183b37a-utilities" (OuterVolumeSpecName: "utilities") pod "9246e095-020f-4be2-850c-bb63b183b37a" (UID: "9246e095-020f-4be2-850c-bb63b183b37a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:49:36 crc kubenswrapper[4772]: I1128 11:49:36.975564 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9246e095-020f-4be2-850c-bb63b183b37a-kube-api-access-728tk" (OuterVolumeSpecName: "kube-api-access-728tk") pod "9246e095-020f-4be2-850c-bb63b183b37a" (UID: "9246e095-020f-4be2-850c-bb63b183b37a"). InnerVolumeSpecName "kube-api-access-728tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.017045 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9246e095-020f-4be2-850c-bb63b183b37a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9246e095-020f-4be2-850c-bb63b183b37a" (UID: "9246e095-020f-4be2-850c-bb63b183b37a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.071040 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9246e095-020f-4be2-850c-bb63b183b37a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.071083 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9246e095-020f-4be2-850c-bb63b183b37a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.071094 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-728tk\" (UniqueName: \"kubernetes.io/projected/9246e095-020f-4be2-850c-bb63b183b37a-kube-api-access-728tk\") on node \"crc\" DevicePath \"\"" Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.378452 4772 generic.go:334] "Generic (PLEG): container finished" podID="9246e095-020f-4be2-850c-bb63b183b37a" containerID="9cef93bf4f86b470179d9ad8ff1e12482c484fdbd3f2b8e233ed6e827cb9c0a7" exitCode=0 Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.378518 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzlf4" event={"ID":"9246e095-020f-4be2-850c-bb63b183b37a","Type":"ContainerDied","Data":"9cef93bf4f86b470179d9ad8ff1e12482c484fdbd3f2b8e233ed6e827cb9c0a7"} Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.378554 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lzlf4" Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.378593 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzlf4" event={"ID":"9246e095-020f-4be2-850c-bb63b183b37a","Type":"ContainerDied","Data":"7e1375a86cfad37d292db176c6aef941da62550e344266db226a3479c1d4554d"} Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.378638 4772 scope.go:117] "RemoveContainer" containerID="9cef93bf4f86b470179d9ad8ff1e12482c484fdbd3f2b8e233ed6e827cb9c0a7" Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.415543 4772 scope.go:117] "RemoveContainer" containerID="2b9a0ff0dbe338c07c286a56fcca405f97ce3322cd6c31085137f26078a01b0d" Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.445159 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lzlf4"] Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.460977 4772 scope.go:117] "RemoveContainer" containerID="cd0ee9d05f09878e06a7ee22f03da58e27bbd9aebdc595a69de258ae4284a6cd" Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.463833 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lzlf4"] Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.526935 4772 scope.go:117] "RemoveContainer" containerID="9cef93bf4f86b470179d9ad8ff1e12482c484fdbd3f2b8e233ed6e827cb9c0a7" Nov 28 11:49:37 crc kubenswrapper[4772]: E1128 11:49:37.527953 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cef93bf4f86b470179d9ad8ff1e12482c484fdbd3f2b8e233ed6e827cb9c0a7\": container with ID starting with 9cef93bf4f86b470179d9ad8ff1e12482c484fdbd3f2b8e233ed6e827cb9c0a7 not found: ID does not exist" containerID="9cef93bf4f86b470179d9ad8ff1e12482c484fdbd3f2b8e233ed6e827cb9c0a7" Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.528082 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cef93bf4f86b470179d9ad8ff1e12482c484fdbd3f2b8e233ed6e827cb9c0a7"} err="failed to get container status \"9cef93bf4f86b470179d9ad8ff1e12482c484fdbd3f2b8e233ed6e827cb9c0a7\": rpc error: code = NotFound desc = could not find container \"9cef93bf4f86b470179d9ad8ff1e12482c484fdbd3f2b8e233ed6e827cb9c0a7\": container with ID starting with 9cef93bf4f86b470179d9ad8ff1e12482c484fdbd3f2b8e233ed6e827cb9c0a7 not found: ID does not exist" Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.528139 4772 scope.go:117] "RemoveContainer" containerID="2b9a0ff0dbe338c07c286a56fcca405f97ce3322cd6c31085137f26078a01b0d" Nov 28 11:49:37 crc kubenswrapper[4772]: E1128 11:49:37.528799 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b9a0ff0dbe338c07c286a56fcca405f97ce3322cd6c31085137f26078a01b0d\": container with ID starting with 2b9a0ff0dbe338c07c286a56fcca405f97ce3322cd6c31085137f26078a01b0d not found: ID does not exist" containerID="2b9a0ff0dbe338c07c286a56fcca405f97ce3322cd6c31085137f26078a01b0d" Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.528846 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b9a0ff0dbe338c07c286a56fcca405f97ce3322cd6c31085137f26078a01b0d"} err="failed to get container status \"2b9a0ff0dbe338c07c286a56fcca405f97ce3322cd6c31085137f26078a01b0d\": rpc error: code = NotFound desc = could not find 
container \"2b9a0ff0dbe338c07c286a56fcca405f97ce3322cd6c31085137f26078a01b0d\": container with ID starting with 2b9a0ff0dbe338c07c286a56fcca405f97ce3322cd6c31085137f26078a01b0d not found: ID does not exist" Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.528879 4772 scope.go:117] "RemoveContainer" containerID="cd0ee9d05f09878e06a7ee22f03da58e27bbd9aebdc595a69de258ae4284a6cd" Nov 28 11:49:37 crc kubenswrapper[4772]: E1128 11:49:37.529372 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0ee9d05f09878e06a7ee22f03da58e27bbd9aebdc595a69de258ae4284a6cd\": container with ID starting with cd0ee9d05f09878e06a7ee22f03da58e27bbd9aebdc595a69de258ae4284a6cd not found: ID does not exist" containerID="cd0ee9d05f09878e06a7ee22f03da58e27bbd9aebdc595a69de258ae4284a6cd" Nov 28 11:49:37 crc kubenswrapper[4772]: I1128 11:49:37.529448 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0ee9d05f09878e06a7ee22f03da58e27bbd9aebdc595a69de258ae4284a6cd"} err="failed to get container status \"cd0ee9d05f09878e06a7ee22f03da58e27bbd9aebdc595a69de258ae4284a6cd\": rpc error: code = NotFound desc = could not find container \"cd0ee9d05f09878e06a7ee22f03da58e27bbd9aebdc595a69de258ae4284a6cd\": container with ID starting with cd0ee9d05f09878e06a7ee22f03da58e27bbd9aebdc595a69de258ae4284a6cd not found: ID does not exist" Nov 28 11:49:38 crc kubenswrapper[4772]: I1128 11:49:38.013833 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9246e095-020f-4be2-850c-bb63b183b37a" path="/var/lib/kubelet/pods/9246e095-020f-4be2-850c-bb63b183b37a/volumes" Nov 28 11:49:43 crc kubenswrapper[4772]: I1128 11:49:43.480174 4772 generic.go:334] "Generic (PLEG): container finished" podID="3bcf40ed-6681-4685-8277-c31e223c9686" containerID="32d3cb64869e028aa2de368793caa045ddf7060f2b5658cc4a3db6a1c2e19349" exitCode=0 Nov 28 11:49:43 crc kubenswrapper[4772]: I1128 11:49:43.480419 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" event={"ID":"3bcf40ed-6681-4685-8277-c31e223c9686","Type":"ContainerDied","Data":"32d3cb64869e028aa2de368793caa045ddf7060f2b5658cc4a3db6a1c2e19349"} Nov 28 11:49:44 crc kubenswrapper[4772]: I1128 11:49:44.942761 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.060048 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckbrr\" (UniqueName: \"kubernetes.io/projected/3bcf40ed-6681-4685-8277-c31e223c9686-kube-api-access-ckbrr\") pod \"3bcf40ed-6681-4685-8277-c31e223c9686\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.060140 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-combined-ca-bundle\") pod \"3bcf40ed-6681-4685-8277-c31e223c9686\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.060339 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-ssh-key\") pod \"3bcf40ed-6681-4685-8277-c31e223c9686\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.060439 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-cell1-compute-config-1\") pod \"3bcf40ed-6681-4685-8277-c31e223c9686\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.060493 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-migration-ssh-key-1\") pod \"3bcf40ed-6681-4685-8277-c31e223c9686\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.060537 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-migration-ssh-key-0\") pod \"3bcf40ed-6681-4685-8277-c31e223c9686\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.060637 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-cell1-compute-config-0\") pod \"3bcf40ed-6681-4685-8277-c31e223c9686\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.061390 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-inventory\") pod \"3bcf40ed-6681-4685-8277-c31e223c9686\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.061462 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3bcf40ed-6681-4685-8277-c31e223c9686-nova-extra-config-0\") pod \"3bcf40ed-6681-4685-8277-c31e223c9686\" (UID: \"3bcf40ed-6681-4685-8277-c31e223c9686\") " Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.069572 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "3bcf40ed-6681-4685-8277-c31e223c9686" (UID: "3bcf40ed-6681-4685-8277-c31e223c9686"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.072498 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bcf40ed-6681-4685-8277-c31e223c9686-kube-api-access-ckbrr" (OuterVolumeSpecName: "kube-api-access-ckbrr") pod "3bcf40ed-6681-4685-8277-c31e223c9686" (UID: "3bcf40ed-6681-4685-8277-c31e223c9686"). InnerVolumeSpecName "kube-api-access-ckbrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.095474 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3bcf40ed-6681-4685-8277-c31e223c9686" (UID: "3bcf40ed-6681-4685-8277-c31e223c9686"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.101300 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "3bcf40ed-6681-4685-8277-c31e223c9686" (UID: "3bcf40ed-6681-4685-8277-c31e223c9686"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.103017 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bcf40ed-6681-4685-8277-c31e223c9686-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "3bcf40ed-6681-4685-8277-c31e223c9686" (UID: "3bcf40ed-6681-4685-8277-c31e223c9686"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.103044 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "3bcf40ed-6681-4685-8277-c31e223c9686" (UID: "3bcf40ed-6681-4685-8277-c31e223c9686"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.105116 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "3bcf40ed-6681-4685-8277-c31e223c9686" (UID: "3bcf40ed-6681-4685-8277-c31e223c9686"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.110748 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "3bcf40ed-6681-4685-8277-c31e223c9686" (UID: "3bcf40ed-6681-4685-8277-c31e223c9686"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.114151 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-inventory" (OuterVolumeSpecName: "inventory") pod "3bcf40ed-6681-4685-8277-c31e223c9686" (UID: "3bcf40ed-6681-4685-8277-c31e223c9686"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.164717 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.164752 4772 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.164767 4772 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.164779 4772 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.164791 4772 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.164806 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.164819 4772 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3bcf40ed-6681-4685-8277-c31e223c9686-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.164830 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckbrr\" (UniqueName: \"kubernetes.io/projected/3bcf40ed-6681-4685-8277-c31e223c9686-kube-api-access-ckbrr\") on node \"crc\" DevicePath \"\"" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.164841 4772 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bcf40ed-6681-4685-8277-c31e223c9686-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.510930 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" event={"ID":"3bcf40ed-6681-4685-8277-c31e223c9686","Type":"ContainerDied","Data":"691fc7793e7c8e6c9926f3e87c0d366885d960dee88b913a6e5d8e4d15e88b49"} Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.511006 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="691fc7793e7c8e6c9926f3e87c0d366885d960dee88b913a6e5d8e4d15e88b49" Nov 28 11:49:45 crc 
kubenswrapper[4772]: I1128 11:49:45.511137 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pl4v2" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.650329 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh"] Nov 28 11:49:45 crc kubenswrapper[4772]: E1128 11:49:45.651308 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9246e095-020f-4be2-850c-bb63b183b37a" containerName="extract-content" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.651327 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9246e095-020f-4be2-850c-bb63b183b37a" containerName="extract-content" Nov 28 11:49:45 crc kubenswrapper[4772]: E1128 11:49:45.651446 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9246e095-020f-4be2-850c-bb63b183b37a" containerName="extract-utilities" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.651456 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9246e095-020f-4be2-850c-bb63b183b37a" containerName="extract-utilities" Nov 28 11:49:45 crc kubenswrapper[4772]: E1128 11:49:45.651494 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9246e095-020f-4be2-850c-bb63b183b37a" containerName="registry-server" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.651503 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9246e095-020f-4be2-850c-bb63b183b37a" containerName="registry-server" Nov 28 11:49:45 crc kubenswrapper[4772]: E1128 11:49:45.651518 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bcf40ed-6681-4685-8277-c31e223c9686" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.651527 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bcf40ed-6681-4685-8277-c31e223c9686" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.651757 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bcf40ed-6681-4685-8277-c31e223c9686" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.651791 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9246e095-020f-4be2-850c-bb63b183b37a" containerName="registry-server" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.652613 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.668200 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.668244 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-snbg7" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.668467 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.669197 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.669198 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.676589 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh"] Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.778229 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.778733 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.778775 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.778844 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.779020 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: 
I1128 11:49:45.779376 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.779622 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdx2k\" (UniqueName: \"kubernetes.io/projected/bed1e099-94a0-45ab-9686-4488e1df9252-kube-api-access-kdx2k\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.881889 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.881943 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.881984 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.882015 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.882077 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.882932 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdx2k\" (UniqueName: \"kubernetes.io/projected/bed1e099-94a0-45ab-9686-4488e1df9252-kube-api-access-kdx2k\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.882969 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.887219 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.888547 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.888967 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.889407 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.890034 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.901818 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.905781 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdx2k\" (UniqueName: \"kubernetes.io/projected/bed1e099-94a0-45ab-9686-4488e1df9252-kube-api-access-kdx2k\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9srrh\" (UID: 
\"bed1e099-94a0-45ab-9686-4488e1df9252\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:45 crc kubenswrapper[4772]: I1128 11:49:45.993216 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:49:46 crc kubenswrapper[4772]: I1128 11:49:46.560815 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh"] Nov 28 11:49:46 crc kubenswrapper[4772]: I1128 11:49:46.565094 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 11:49:47 crc kubenswrapper[4772]: I1128 11:49:47.542823 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" event={"ID":"bed1e099-94a0-45ab-9686-4488e1df9252","Type":"ContainerStarted","Data":"38ced6cd652f7535c00ac39a3dd8630957c45cef905a6e36fde416b07da703fa"} Nov 28 11:49:48 crc kubenswrapper[4772]: I1128 11:49:48.553578 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" event={"ID":"bed1e099-94a0-45ab-9686-4488e1df9252","Type":"ContainerStarted","Data":"500d8108a39a52ea4bbccb1899446ebb2497fc6fac131d4747f8454274202853"} Nov 28 11:49:48 crc kubenswrapper[4772]: I1128 11:49:48.591793 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" podStartSLOduration=2.614404837 podStartE2EDuration="3.591772111s" podCreationTimestamp="2025-11-28 11:49:45 +0000 UTC" firstStartedPulling="2025-11-28 11:49:46.5647437 +0000 UTC m=+2584.887986947" lastFinishedPulling="2025-11-28 11:49:47.542110954 +0000 UTC m=+2585.865354221" observedRunningTime="2025-11-28 11:49:48.580657455 +0000 UTC m=+2586.903900712" watchObservedRunningTime="2025-11-28 11:49:48.591772111 +0000 UTC m=+2586.915015348" Nov 28 11:49:48 crc kubenswrapper[4772]: I1128 11:49:48.994601 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" Nov 28 11:49:48 crc kubenswrapper[4772]: E1128 11:49:48.995211 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:49:59 crc kubenswrapper[4772]: I1128 11:49:59.994506 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" Nov 28 11:49:59 crc kubenswrapper[4772]: E1128 11:49:59.995300 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:50:12 crc kubenswrapper[4772]: I1128 11:50:12.014062 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" Nov 28 11:50:12 crc 
kubenswrapper[4772]: E1128 11:50:12.015797 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:50:23 crc kubenswrapper[4772]: I1128 11:50:23.994896 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" Nov 28 11:50:23 crc kubenswrapper[4772]: E1128 11:50:23.995963 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:50:35 crc kubenswrapper[4772]: I1128 11:50:35.994520 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" Nov 28 11:50:35 crc kubenswrapper[4772]: E1128 11:50:35.995312 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:50:46 crc kubenswrapper[4772]: I1128 11:50:46.994805 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" Nov 28 11:50:46 crc kubenswrapper[4772]: E1128 11:50:46.997833 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:50:58 crc kubenswrapper[4772]: I1128 11:50:58.995819 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" Nov 28 11:50:58 crc kubenswrapper[4772]: E1128 11:50:58.997076 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:51:12 crc kubenswrapper[4772]: I1128 11:51:12.994953 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" Nov 28 11:51:12 crc kubenswrapper[4772]: E1128 11:51:12.995936 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:51:24 crc kubenswrapper[4772]: I1128 11:51:24.995027 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" Nov 28 11:51:24 crc kubenswrapper[4772]: E1128 11:51:24.996086 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:51:38 crc kubenswrapper[4772]: I1128 11:51:38.994639 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" Nov 28 11:51:38 crc kubenswrapper[4772]: E1128 11:51:38.995501 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:51:52 crc kubenswrapper[4772]: I1128 11:51:52.003005 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" Nov 28 11:51:52 crc kubenswrapper[4772]: E1128 11:51:52.004191 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:52:02 crc kubenswrapper[4772]: I1128 11:52:02.994723 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" Nov 28 11:52:04 crc kubenswrapper[4772]: I1128 11:52:04.098451 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"4f528da92aae59ffe428941fe10740aafb66385b3cf3a7e16a99055a6e05cabe"} Nov 28 11:52:34 crc kubenswrapper[4772]: I1128 11:52:34.442269 4772 generic.go:334] "Generic (PLEG): container finished" podID="bed1e099-94a0-45ab-9686-4488e1df9252" containerID="500d8108a39a52ea4bbccb1899446ebb2497fc6fac131d4747f8454274202853" exitCode=0 Nov 28 11:52:34 crc kubenswrapper[4772]: I1128 11:52:34.442468 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" event={"ID":"bed1e099-94a0-45ab-9686-4488e1df9252","Type":"ContainerDied","Data":"500d8108a39a52ea4bbccb1899446ebb2497fc6fac131d4747f8454274202853"} Nov 28 11:52:35 crc kubenswrapper[4772]: I1128 11:52:35.956391 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.099254 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-telemetry-combined-ca-bundle\") pod \"bed1e099-94a0-45ab-9686-4488e1df9252\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.099397 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-1\") pod \"bed1e099-94a0-45ab-9686-4488e1df9252\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.099511 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ssh-key\") pod \"bed1e099-94a0-45ab-9686-4488e1df9252\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.099574 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-0\") pod \"bed1e099-94a0-45ab-9686-4488e1df9252\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.099705 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-inventory\") pod \"bed1e099-94a0-45ab-9686-4488e1df9252\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.099852 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-2\") pod \"bed1e099-94a0-45ab-9686-4488e1df9252\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.099910 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdx2k\" (UniqueName: \"kubernetes.io/projected/bed1e099-94a0-45ab-9686-4488e1df9252-kube-api-access-kdx2k\") pod \"bed1e099-94a0-45ab-9686-4488e1df9252\" (UID: \"bed1e099-94a0-45ab-9686-4488e1df9252\") " Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.107348 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bed1e099-94a0-45ab-9686-4488e1df9252-kube-api-access-kdx2k" (OuterVolumeSpecName: "kube-api-access-kdx2k") pod "bed1e099-94a0-45ab-9686-4488e1df9252" (UID: "bed1e099-94a0-45ab-9686-4488e1df9252"). InnerVolumeSpecName "kube-api-access-kdx2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.108092 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "bed1e099-94a0-45ab-9686-4488e1df9252" (UID: "bed1e099-94a0-45ab-9686-4488e1df9252"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.130001 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "bed1e099-94a0-45ab-9686-4488e1df9252" (UID: "bed1e099-94a0-45ab-9686-4488e1df9252"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.130389 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "bed1e099-94a0-45ab-9686-4488e1df9252" (UID: "bed1e099-94a0-45ab-9686-4488e1df9252"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.144680 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-inventory" (OuterVolumeSpecName: "inventory") pod "bed1e099-94a0-45ab-9686-4488e1df9252" (UID: "bed1e099-94a0-45ab-9686-4488e1df9252"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.151153 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bed1e099-94a0-45ab-9686-4488e1df9252" (UID: "bed1e099-94a0-45ab-9686-4488e1df9252"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.160880 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "bed1e099-94a0-45ab-9686-4488e1df9252" (UID: "bed1e099-94a0-45ab-9686-4488e1df9252"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.202436 4772 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.204126 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.204233 4772 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.204294 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdx2k\" (UniqueName: \"kubernetes.io/projected/bed1e099-94a0-45ab-9686-4488e1df9252-kube-api-access-kdx2k\") on node \"crc\" DevicePath \"\"" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.204319 4772 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.204338 4772 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.204382 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bed1e099-94a0-45ab-9686-4488e1df9252-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.474688 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" event={"ID":"bed1e099-94a0-45ab-9686-4488e1df9252","Type":"ContainerDied","Data":"38ced6cd652f7535c00ac39a3dd8630957c45cef905a6e36fde416b07da703fa"} Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.474767 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9srrh" Nov 28 11:52:36 crc kubenswrapper[4772]: I1128 11:52:36.474776 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38ced6cd652f7535c00ac39a3dd8630957c45cef905a6e36fde416b07da703fa" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.674558 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 11:53:17 crc kubenswrapper[4772]: E1128 11:53:17.676232 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed1e099-94a0-45ab-9686-4488e1df9252" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.676261 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed1e099-94a0-45ab-9686-4488e1df9252" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.676725 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed1e099-94a0-45ab-9686-4488e1df9252" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.679053 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.683327 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.683860 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.683868 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.683874 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bx5kx" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.683985 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.779446 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/39592588-10c2-45fd-88fb-cb63f200c871-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.779544 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/39592588-10c2-45fd-88fb-cb63f200c871-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.779600 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/39592588-10c2-45fd-88fb-cb63f200c871-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.779677 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39592588-10c2-45fd-88fb-cb63f200c871-config-data\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.779785 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.779835 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.779915 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.780105 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm4w4\" (UniqueName: \"kubernetes.io/projected/39592588-10c2-45fd-88fb-cb63f200c871-kube-api-access-jm4w4\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.780176 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.882860 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm4w4\" (UniqueName: \"kubernetes.io/projected/39592588-10c2-45fd-88fb-cb63f200c871-kube-api-access-jm4w4\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.882970 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.883123 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/39592588-10c2-45fd-88fb-cb63f200c871-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.883185 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/39592588-10c2-45fd-88fb-cb63f200c871-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.883239 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/39592588-10c2-45fd-88fb-cb63f200c871-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.883284 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39592588-10c2-45fd-88fb-cb63f200c871-config-data\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.883440 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.883509 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.883590 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.883639 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.884093 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/39592588-10c2-45fd-88fb-cb63f200c871-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.884807 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39592588-10c2-45fd-88fb-cb63f200c871-config-data\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.884956 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/39592588-10c2-45fd-88fb-cb63f200c871-test-operator-ephemeral-temporary\") pod 
\"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.885135 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/39592588-10c2-45fd-88fb-cb63f200c871-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.894209 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.894241 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.894877 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.904914 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm4w4\" (UniqueName: \"kubernetes.io/projected/39592588-10c2-45fd-88fb-cb63f200c871-kube-api-access-jm4w4\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:17 crc kubenswrapper[4772]: I1128 11:53:17.931244 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " pod="openstack/tempest-tests-tempest" Nov 28 11:53:18 crc kubenswrapper[4772]: I1128 11:53:18.014905 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 11:53:18 crc kubenswrapper[4772]: I1128 11:53:18.694925 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 11:53:18 crc kubenswrapper[4772]: I1128 11:53:18.939130 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"39592588-10c2-45fd-88fb-cb63f200c871","Type":"ContainerStarted","Data":"6beb57b44bbce4109fd95cf04b431ca3d5c77669b690373a04b30b99bcae408d"} Nov 28 11:53:32 crc kubenswrapper[4772]: I1128 11:53:32.629511 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rnwvz"] Nov 28 11:53:32 crc kubenswrapper[4772]: I1128 11:53:32.635574 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:53:32 crc kubenswrapper[4772]: I1128 11:53:32.641879 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnwvz"] Nov 28 11:53:32 crc kubenswrapper[4772]: I1128 11:53:32.839684 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9555c331-28ed-4867-ba89-4866ea9acb64-catalog-content\") pod \"certified-operators-rnwvz\" (UID: \"9555c331-28ed-4867-ba89-4866ea9acb64\") " pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:53:32 crc kubenswrapper[4772]: I1128 11:53:32.839798 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dpvq\" (UniqueName: \"kubernetes.io/projected/9555c331-28ed-4867-ba89-4866ea9acb64-kube-api-access-4dpvq\") pod \"certified-operators-rnwvz\" (UID: \"9555c331-28ed-4867-ba89-4866ea9acb64\") " pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:53:32 crc kubenswrapper[4772]: I1128 11:53:32.839937 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9555c331-28ed-4867-ba89-4866ea9acb64-utilities\") pod \"certified-operators-rnwvz\" (UID: \"9555c331-28ed-4867-ba89-4866ea9acb64\") " pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:53:32 crc kubenswrapper[4772]: I1128 11:53:32.941824 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9555c331-28ed-4867-ba89-4866ea9acb64-utilities\") pod \"certified-operators-rnwvz\" (UID: \"9555c331-28ed-4867-ba89-4866ea9acb64\") " pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:53:32 crc kubenswrapper[4772]: I1128 11:53:32.941939 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9555c331-28ed-4867-ba89-4866ea9acb64-catalog-content\") pod \"certified-operators-rnwvz\" (UID: \"9555c331-28ed-4867-ba89-4866ea9acb64\") " pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:53:32 crc kubenswrapper[4772]: I1128 11:53:32.941972 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dpvq\" (UniqueName: \"kubernetes.io/projected/9555c331-28ed-4867-ba89-4866ea9acb64-kube-api-access-4dpvq\") pod \"certified-operators-rnwvz\" (UID: \"9555c331-28ed-4867-ba89-4866ea9acb64\") " pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:53:32 crc kubenswrapper[4772]: I1128 11:53:32.942401 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9555c331-28ed-4867-ba89-4866ea9acb64-utilities\") pod \"certified-operators-rnwvz\" (UID: \"9555c331-28ed-4867-ba89-4866ea9acb64\") " pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:53:32 crc kubenswrapper[4772]: I1128 11:53:32.942769 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9555c331-28ed-4867-ba89-4866ea9acb64-catalog-content\") pod \"certified-operators-rnwvz\" (UID: \"9555c331-28ed-4867-ba89-4866ea9acb64\") " pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:53:32 crc kubenswrapper[4772]: I1128 11:53:32.966532 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4dpvq\" (UniqueName: \"kubernetes.io/projected/9555c331-28ed-4867-ba89-4866ea9acb64-kube-api-access-4dpvq\") pod \"certified-operators-rnwvz\" (UID: \"9555c331-28ed-4867-ba89-4866ea9acb64\") " pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:53:33 crc kubenswrapper[4772]: I1128 11:53:33.256341 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:53:51 crc kubenswrapper[4772]: E1128 11:53:51.117761 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 28 11:53:51 crc kubenswrapper[4772]: E1128 11:53:51.118548 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jm4w4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optiona
l:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(39592588-10c2-45fd-88fb-cb63f200c871): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 11:53:51 crc kubenswrapper[4772]: E1128 11:53:51.119828 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="39592588-10c2-45fd-88fb-cb63f200c871" Nov 28 11:53:51 crc kubenswrapper[4772]: I1128 11:53:51.274160 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnwvz"] Nov 28 11:53:51 crc kubenswrapper[4772]: W1128 11:53:51.287661 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9555c331_28ed_4867_ba89_4866ea9acb64.slice/crio-15b5314a7533e67b1bcd7c485d3511a7c4017acd295bc125f4b8091fd232da89 WatchSource:0}: Error finding container 15b5314a7533e67b1bcd7c485d3511a7c4017acd295bc125f4b8091fd232da89: Status 404 returned error can't find the container with id 15b5314a7533e67b1bcd7c485d3511a7c4017acd295bc125f4b8091fd232da89 Nov 28 11:53:51 crc kubenswrapper[4772]: E1128 11:53:51.290847 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="39592588-10c2-45fd-88fb-cb63f200c871" Nov 28 11:53:52 crc kubenswrapper[4772]: I1128 11:53:52.302715 4772 generic.go:334] "Generic (PLEG): container finished" podID="9555c331-28ed-4867-ba89-4866ea9acb64" containerID="9782b00489761fe105e72349ca823bb03d0cb5864600b6a91feed43b459eb1a3" exitCode=0 Nov 28 11:53:52 crc kubenswrapper[4772]: I1128 11:53:52.302846 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnwvz" event={"ID":"9555c331-28ed-4867-ba89-4866ea9acb64","Type":"ContainerDied","Data":"9782b00489761fe105e72349ca823bb03d0cb5864600b6a91feed43b459eb1a3"} Nov 28 11:53:52 crc kubenswrapper[4772]: I1128 11:53:52.303252 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnwvz" event={"ID":"9555c331-28ed-4867-ba89-4866ea9acb64","Type":"ContainerStarted","Data":"15b5314a7533e67b1bcd7c485d3511a7c4017acd295bc125f4b8091fd232da89"} Nov 28 11:53:54 crc kubenswrapper[4772]: I1128 11:53:54.330144 4772 generic.go:334] "Generic (PLEG): container finished" podID="9555c331-28ed-4867-ba89-4866ea9acb64" containerID="de553c7722c148fa623ba5098a0e1b2898cc2094668e7ec83d2aa9912c65fabd" exitCode=0 Nov 28 11:53:54 crc kubenswrapper[4772]: I1128 11:53:54.330269 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnwvz" event={"ID":"9555c331-28ed-4867-ba89-4866ea9acb64","Type":"ContainerDied","Data":"de553c7722c148fa623ba5098a0e1b2898cc2094668e7ec83d2aa9912c65fabd"} Nov 28 11:53:55 crc kubenswrapper[4772]: I1128 
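11:53:55.343512 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnwvz" event={"ID":"9555c331-28ed-4867-ba89-4866ea9acb64","Type":"ContainerStarted","Data":"7669e815daee8b6cc0585f89be0f1c5c87c59de6fc16b9c543aac3b80bcc5e26"}

Above, the tempest-tests-tempest image pull was canceled mid-copy (ErrImagePull: "copying config: context canceled") and the pod immediately moved to ImagePullBackOff. Retries are then spaced by a doubling backoff; a sketch, assuming the upstream kubelet defaults of a 10s initial delay capped at 5m, not values read from this node's config:

```go
// Sketch of the exponential delay behind ImagePullBackOff: each failed
// pull doubles the wait until the cap is reached.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute // assumed defaults
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("attempt %d: back off %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
}
```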
Nov 28 11:53:55 crc kubenswrapper[4772]: I1128 11:53:55.373054 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rnwvz" podStartSLOduration=20.780346319 podStartE2EDuration="23.373007016s" podCreationTimestamp="2025-11-28 11:53:32 +0000 UTC" firstStartedPulling="2025-11-28 11:53:52.305846969 +0000 UTC m=+2830.629090196" lastFinishedPulling="2025-11-28 11:53:54.898507626 +0000 UTC m=+2833.221750893" observedRunningTime="2025-11-28 11:53:55.363671107 +0000 UTC m=+2833.686914354" watchObservedRunningTime="2025-11-28 11:53:55.373007016 +0000 UTC m=+2833.696250263" Nov 28 11:54:03 crc kubenswrapper[4772]: I1128 11:54:03.257397 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:54:03 crc kubenswrapper[4772]: I1128 11:54:03.257832 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:54:03 crc kubenswrapper[4772]: I1128 11:54:03.362224 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:54:03 crc kubenswrapper[4772]: I1128 11:54:03.466861 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:54:03 crc kubenswrapper[4772]: I1128 11:54:03.823319 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rnwvz"] Nov 28 11:54:03 crc kubenswrapper[4772]: I1128 11:54:03.866996 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 28 11:54:05 crc kubenswrapper[4772]: I1128 11:54:05.443433 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"39592588-10c2-45fd-88fb-cb63f200c871","Type":"ContainerStarted","Data":"3768cd9511fb04c58cf75995b759bd2b8e5c4a3bb16e559199f85046519d287b"} Nov 28 11:54:05 crc kubenswrapper[4772]: I1128 11:54:05.443638 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rnwvz" podUID="9555c331-28ed-4867-ba89-4866ea9acb64" containerName="registry-server" containerID="cri-o://7669e815daee8b6cc0585f89be0f1c5c87c59de6fc16b9c543aac3b80bcc5e26" gracePeriod=2 Nov 28 11:54:05 crc kubenswrapper[4772]: I1128 11:54:05.499017 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.34196228 podStartE2EDuration="49.498996403s" podCreationTimestamp="2025-11-28 11:53:16 +0000 UTC" firstStartedPulling="2025-11-28 11:53:18.707428935 +0000 UTC m=+2797.030672192" lastFinishedPulling="2025-11-28 11:54:03.864463088 +0000 UTC m=+2842.187706315" observedRunningTime="2025-11-28 11:54:05.481271571 +0000 UTC m=+2843.804514828" watchObservedRunningTime="2025-11-28 11:54:05.498996403 +0000 UTC m=+2843.822239630" Nov 28 11:54:05 crc kubenswrapper[4772]: I1128 11:54:05.940935 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.054935 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dpvq\" (UniqueName: \"kubernetes.io/projected/9555c331-28ed-4867-ba89-4866ea9acb64-kube-api-access-4dpvq\") pod \"9555c331-28ed-4867-ba89-4866ea9acb64\" (UID: \"9555c331-28ed-4867-ba89-4866ea9acb64\") " Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.055049 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9555c331-28ed-4867-ba89-4866ea9acb64-catalog-content\") pod \"9555c331-28ed-4867-ba89-4866ea9acb64\" (UID: \"9555c331-28ed-4867-ba89-4866ea9acb64\") " Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.055273 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9555c331-28ed-4867-ba89-4866ea9acb64-utilities\") pod \"9555c331-28ed-4867-ba89-4866ea9acb64\" (UID: \"9555c331-28ed-4867-ba89-4866ea9acb64\") " Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.056196 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9555c331-28ed-4867-ba89-4866ea9acb64-utilities" (OuterVolumeSpecName: "utilities") pod "9555c331-28ed-4867-ba89-4866ea9acb64" (UID: "9555c331-28ed-4867-ba89-4866ea9acb64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.056625 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9555c331-28ed-4867-ba89-4866ea9acb64-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.071203 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9555c331-28ed-4867-ba89-4866ea9acb64-kube-api-access-4dpvq" (OuterVolumeSpecName: "kube-api-access-4dpvq") pod "9555c331-28ed-4867-ba89-4866ea9acb64" (UID: "9555c331-28ed-4867-ba89-4866ea9acb64"). InnerVolumeSpecName "kube-api-access-4dpvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.118277 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9555c331-28ed-4867-ba89-4866ea9acb64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9555c331-28ed-4867-ba89-4866ea9acb64" (UID: "9555c331-28ed-4867-ba89-4866ea9acb64"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.158028 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dpvq\" (UniqueName: \"kubernetes.io/projected/9555c331-28ed-4867-ba89-4866ea9acb64-kube-api-access-4dpvq\") on node \"crc\" DevicePath \"\"" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.158080 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9555c331-28ed-4867-ba89-4866ea9acb64-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.454321 4772 generic.go:334] "Generic (PLEG): container finished" podID="9555c331-28ed-4867-ba89-4866ea9acb64" containerID="7669e815daee8b6cc0585f89be0f1c5c87c59de6fc16b9c543aac3b80bcc5e26" exitCode=0 Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.454433 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnwvz" event={"ID":"9555c331-28ed-4867-ba89-4866ea9acb64","Type":"ContainerDied","Data":"7669e815daee8b6cc0585f89be0f1c5c87c59de6fc16b9c543aac3b80bcc5e26"} Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.454484 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnwvz" event={"ID":"9555c331-28ed-4867-ba89-4866ea9acb64","Type":"ContainerDied","Data":"15b5314a7533e67b1bcd7c485d3511a7c4017acd295bc125f4b8091fd232da89"} Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.454519 4772 scope.go:117] "RemoveContainer" containerID="7669e815daee8b6cc0585f89be0f1c5c87c59de6fc16b9c543aac3b80bcc5e26" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.454752 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnwvz" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.501599 4772 scope.go:117] "RemoveContainer" containerID="de553c7722c148fa623ba5098a0e1b2898cc2094668e7ec83d2aa9912c65fabd" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.512110 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rnwvz"] Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.522265 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rnwvz"] Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.525266 4772 scope.go:117] "RemoveContainer" containerID="9782b00489761fe105e72349ca823bb03d0cb5864600b6a91feed43b459eb1a3" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.595295 4772 scope.go:117] "RemoveContainer" containerID="7669e815daee8b6cc0585f89be0f1c5c87c59de6fc16b9c543aac3b80bcc5e26" Nov 28 11:54:06 crc kubenswrapper[4772]: E1128 11:54:06.595985 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7669e815daee8b6cc0585f89be0f1c5c87c59de6fc16b9c543aac3b80bcc5e26\": container with ID starting with 7669e815daee8b6cc0585f89be0f1c5c87c59de6fc16b9c543aac3b80bcc5e26 not found: ID does not exist" containerID="7669e815daee8b6cc0585f89be0f1c5c87c59de6fc16b9c543aac3b80bcc5e26" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.596046 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7669e815daee8b6cc0585f89be0f1c5c87c59de6fc16b9c543aac3b80bcc5e26"} err="failed to get container status \"7669e815daee8b6cc0585f89be0f1c5c87c59de6fc16b9c543aac3b80bcc5e26\": rpc error: code = NotFound desc = could not find container \"7669e815daee8b6cc0585f89be0f1c5c87c59de6fc16b9c543aac3b80bcc5e26\": container with ID starting with 7669e815daee8b6cc0585f89be0f1c5c87c59de6fc16b9c543aac3b80bcc5e26 not found: ID does not exist" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.596085 4772 scope.go:117] "RemoveContainer" containerID="de553c7722c148fa623ba5098a0e1b2898cc2094668e7ec83d2aa9912c65fabd" Nov 28 11:54:06 crc kubenswrapper[4772]: E1128 11:54:06.596723 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de553c7722c148fa623ba5098a0e1b2898cc2094668e7ec83d2aa9912c65fabd\": container with ID starting with de553c7722c148fa623ba5098a0e1b2898cc2094668e7ec83d2aa9912c65fabd not found: ID does not exist" containerID="de553c7722c148fa623ba5098a0e1b2898cc2094668e7ec83d2aa9912c65fabd" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.596762 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de553c7722c148fa623ba5098a0e1b2898cc2094668e7ec83d2aa9912c65fabd"} err="failed to get container status \"de553c7722c148fa623ba5098a0e1b2898cc2094668e7ec83d2aa9912c65fabd\": rpc error: code = NotFound desc = could not find container \"de553c7722c148fa623ba5098a0e1b2898cc2094668e7ec83d2aa9912c65fabd\": container with ID starting with de553c7722c148fa623ba5098a0e1b2898cc2094668e7ec83d2aa9912c65fabd not found: ID does not exist" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.596785 4772 scope.go:117] "RemoveContainer" containerID="9782b00489761fe105e72349ca823bb03d0cb5864600b6a91feed43b459eb1a3" Nov 28 11:54:06 crc kubenswrapper[4772]: E1128 11:54:06.597153 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9782b00489761fe105e72349ca823bb03d0cb5864600b6a91feed43b459eb1a3\": container with ID starting with 9782b00489761fe105e72349ca823bb03d0cb5864600b6a91feed43b459eb1a3 not found: ID does not exist" containerID="9782b00489761fe105e72349ca823bb03d0cb5864600b6a91feed43b459eb1a3" Nov 28 11:54:06 crc kubenswrapper[4772]: I1128 11:54:06.597186 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9782b00489761fe105e72349ca823bb03d0cb5864600b6a91feed43b459eb1a3"} err="failed to get container status \"9782b00489761fe105e72349ca823bb03d0cb5864600b6a91feed43b459eb1a3\": rpc error: code = NotFound desc = could not find container \"9782b00489761fe105e72349ca823bb03d0cb5864600b6a91feed43b459eb1a3\": container with ID starting with 9782b00489761fe105e72349ca823bb03d0cb5864600b6a91feed43b459eb1a3 not found: ID does not exist" Nov 28 11:54:08 crc kubenswrapper[4772]: I1128 11:54:08.007271 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9555c331-28ed-4867-ba89-4866ea9acb64" path="/var/lib/kubelet/pods/9555c331-28ed-4867-ba89-4866ea9acb64/volumes" Nov 28 11:54:23 crc kubenswrapper[4772]: I1128 11:54:23.896231 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:54:23 crc kubenswrapper[4772]: I1128 11:54:23.896883 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:54:53 crc kubenswrapper[4772]: I1128 11:54:53.897039 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:54:53 crc kubenswrapper[4772]: I1128 11:54:53.898044 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:55:03 crc kubenswrapper[4772]: I1128 11:55:03.828130 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jjvvx"] Nov 28 11:55:03 crc kubenswrapper[4772]: E1128 11:55:03.829266 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9555c331-28ed-4867-ba89-4866ea9acb64" containerName="extract-content" Nov 28 11:55:03 crc kubenswrapper[4772]: I1128 11:55:03.829284 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9555c331-28ed-4867-ba89-4866ea9acb64" containerName="extract-content" Nov 28 11:55:03 crc kubenswrapper[4772]: E1128 11:55:03.829312 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9555c331-28ed-4867-ba89-4866ea9acb64" containerName="extract-utilities" Nov 28 11:55:03 crc kubenswrapper[4772]: I1128 11:55:03.829322 4772 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9555c331-28ed-4867-ba89-4866ea9acb64" containerName="extract-utilities" Nov 28 11:55:03 crc kubenswrapper[4772]: E1128 11:55:03.829336 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9555c331-28ed-4867-ba89-4866ea9acb64" containerName="registry-server" Nov 28 11:55:03 crc kubenswrapper[4772]: I1128 11:55:03.829346 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9555c331-28ed-4867-ba89-4866ea9acb64" containerName="registry-server" Nov 28 11:55:03 crc kubenswrapper[4772]: I1128 11:55:03.829665 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9555c331-28ed-4867-ba89-4866ea9acb64" containerName="registry-server" Nov 28 11:55:03 crc kubenswrapper[4772]: I1128 11:55:03.831413 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:03 crc kubenswrapper[4772]: I1128 11:55:03.841480 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jjvvx"] Nov 28 11:55:03 crc kubenswrapper[4772]: I1128 11:55:03.960161 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4eac3a7-8d69-4cd6-947c-66b56eff7587-catalog-content\") pod \"redhat-operators-jjvvx\" (UID: \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\") " pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:03 crc kubenswrapper[4772]: I1128 11:55:03.960297 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jkpl\" (UniqueName: \"kubernetes.io/projected/c4eac3a7-8d69-4cd6-947c-66b56eff7587-kube-api-access-7jkpl\") pod \"redhat-operators-jjvvx\" (UID: \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\") " pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:03 crc kubenswrapper[4772]: I1128 11:55:03.960341 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4eac3a7-8d69-4cd6-947c-66b56eff7587-utilities\") pod \"redhat-operators-jjvvx\" (UID: \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\") " pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:04 crc kubenswrapper[4772]: I1128 11:55:04.061793 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jkpl\" (UniqueName: \"kubernetes.io/projected/c4eac3a7-8d69-4cd6-947c-66b56eff7587-kube-api-access-7jkpl\") pod \"redhat-operators-jjvvx\" (UID: \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\") " pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:04 crc kubenswrapper[4772]: I1128 11:55:04.061868 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4eac3a7-8d69-4cd6-947c-66b56eff7587-utilities\") pod \"redhat-operators-jjvvx\" (UID: \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\") " pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:04 crc kubenswrapper[4772]: I1128 11:55:04.061985 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4eac3a7-8d69-4cd6-947c-66b56eff7587-catalog-content\") pod \"redhat-operators-jjvvx\" (UID: \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\") " pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:04 crc kubenswrapper[4772]: I1128 11:55:04.062919 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4eac3a7-8d69-4cd6-947c-66b56eff7587-catalog-content\") pod \"redhat-operators-jjvvx\" (UID: \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\") " pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:04 crc kubenswrapper[4772]: I1128 11:55:04.063507 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4eac3a7-8d69-4cd6-947c-66b56eff7587-utilities\") pod \"redhat-operators-jjvvx\" (UID: \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\") " pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:04 crc kubenswrapper[4772]: I1128 11:55:04.090323 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jkpl\" (UniqueName: \"kubernetes.io/projected/c4eac3a7-8d69-4cd6-947c-66b56eff7587-kube-api-access-7jkpl\") pod \"redhat-operators-jjvvx\" (UID: \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\") " pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:04 crc kubenswrapper[4772]: I1128 11:55:04.161857 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:04 crc kubenswrapper[4772]: I1128 11:55:04.678333 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jjvvx"] Nov 28 11:55:05 crc kubenswrapper[4772]: I1128 11:55:05.104953 4772 generic.go:334] "Generic (PLEG): container finished" podID="c4eac3a7-8d69-4cd6-947c-66b56eff7587" containerID="912c341c4a583f50c4e78ed5925687058bd5d75c4f71923ae343cbaa665de0da" exitCode=0 Nov 28 11:55:05 crc kubenswrapper[4772]: I1128 11:55:05.105060 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jjvvx" event={"ID":"c4eac3a7-8d69-4cd6-947c-66b56eff7587","Type":"ContainerDied","Data":"912c341c4a583f50c4e78ed5925687058bd5d75c4f71923ae343cbaa665de0da"} Nov 28 11:55:05 crc kubenswrapper[4772]: I1128 11:55:05.106466 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jjvvx" event={"ID":"c4eac3a7-8d69-4cd6-947c-66b56eff7587","Type":"ContainerStarted","Data":"ce60c257ad067d87babae82805e7505bd36050ca7afba70f8abf2f3aa5fb80b3"} Nov 28 11:55:05 crc kubenswrapper[4772]: I1128 11:55:05.106680 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 11:55:06 crc kubenswrapper[4772]: I1128 11:55:06.124308 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jjvvx" event={"ID":"c4eac3a7-8d69-4cd6-947c-66b56eff7587","Type":"ContainerStarted","Data":"51583ac78bf545d18769d09508d1e6611d3229c3eaccc1f2f6b6adb3bb71a882"} Nov 28 11:55:07 crc kubenswrapper[4772]: I1128 11:55:07.140702 4772 generic.go:334] "Generic (PLEG): container finished" podID="c4eac3a7-8d69-4cd6-947c-66b56eff7587" containerID="51583ac78bf545d18769d09508d1e6611d3229c3eaccc1f2f6b6adb3bb71a882" exitCode=0 Nov 28 11:55:07 crc kubenswrapper[4772]: I1128 11:55:07.140765 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jjvvx" event={"ID":"c4eac3a7-8d69-4cd6-947c-66b56eff7587","Type":"ContainerDied","Data":"51583ac78bf545d18769d09508d1e6611d3229c3eaccc1f2f6b6adb3bb71a882"} Nov 28 11:55:08 crc kubenswrapper[4772]: I1128 11:55:08.157525 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-jjvvx" event={"ID":"c4eac3a7-8d69-4cd6-947c-66b56eff7587","Type":"ContainerStarted","Data":"3cef8e60e23fb87a784c117cd5b2873296af925a75836bf555316db0b79f31cc"} Nov 28 11:55:08 crc kubenswrapper[4772]: I1128 11:55:08.197694 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jjvvx" podStartSLOduration=2.523366237 podStartE2EDuration="5.19767486s" podCreationTimestamp="2025-11-28 11:55:03 +0000 UTC" firstStartedPulling="2025-11-28 11:55:05.106444323 +0000 UTC m=+2903.429687550" lastFinishedPulling="2025-11-28 11:55:07.780752936 +0000 UTC m=+2906.103996173" observedRunningTime="2025-11-28 11:55:08.190316431 +0000 UTC m=+2906.513559708" watchObservedRunningTime="2025-11-28 11:55:08.19767486 +0000 UTC m=+2906.520918097" Nov 28 11:55:14 crc kubenswrapper[4772]: I1128 11:55:14.162409 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:14 crc kubenswrapper[4772]: I1128 11:55:14.165741 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:15 crc kubenswrapper[4772]: I1128 11:55:15.256179 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jjvvx" podUID="c4eac3a7-8d69-4cd6-947c-66b56eff7587" containerName="registry-server" probeResult="failure" output=< Nov 28 11:55:15 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 28 11:55:15 crc kubenswrapper[4772]: > Nov 28 11:55:23 crc kubenswrapper[4772]: I1128 11:55:23.897182 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:55:23 crc kubenswrapper[4772]: I1128 11:55:23.897861 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:55:23 crc kubenswrapper[4772]: I1128 11:55:23.897926 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:55:23 crc kubenswrapper[4772]: I1128 11:55:23.899472 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f528da92aae59ffe428941fe10740aafb66385b3cf3a7e16a99055a6e05cabe"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 11:55:23 crc kubenswrapper[4772]: I1128 11:55:23.899613 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://4f528da92aae59ffe428941fe10740aafb66385b3cf3a7e16a99055a6e05cabe" gracePeriod=600 Nov 28 11:55:24 crc kubenswrapper[4772]: I1128 11:55:24.247211 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:24 crc kubenswrapper[4772]: I1128 11:55:24.319772 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:24 crc kubenswrapper[4772]: I1128 11:55:24.374939 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="4f528da92aae59ffe428941fe10740aafb66385b3cf3a7e16a99055a6e05cabe" exitCode=0 Nov 28 11:55:24 crc kubenswrapper[4772]: I1128 11:55:24.375001 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"4f528da92aae59ffe428941fe10740aafb66385b3cf3a7e16a99055a6e05cabe"} Nov 28 11:55:24 crc kubenswrapper[4772]: I1128 11:55:24.375065 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1"} Nov 28 11:55:24 crc kubenswrapper[4772]: I1128 11:55:24.375089 4772 scope.go:117] "RemoveContainer" containerID="51dbc2ddce975e68e33eef96cafa8bb0989870a8e455ad4bccfae84f75a7a522" Nov 28 11:55:24 crc kubenswrapper[4772]: I1128 11:55:24.491875 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jjvvx"] Nov 28 11:55:25 crc kubenswrapper[4772]: I1128 11:55:25.391266 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jjvvx" podUID="c4eac3a7-8d69-4cd6-947c-66b56eff7587" containerName="registry-server" containerID="cri-o://3cef8e60e23fb87a784c117cd5b2873296af925a75836bf555316db0b79f31cc" gracePeriod=2 Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.001597 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.058521 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jkpl\" (UniqueName: \"kubernetes.io/projected/c4eac3a7-8d69-4cd6-947c-66b56eff7587-kube-api-access-7jkpl\") pod \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\" (UID: \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\") " Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.058697 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4eac3a7-8d69-4cd6-947c-66b56eff7587-catalog-content\") pod \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\" (UID: \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\") " Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.058768 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4eac3a7-8d69-4cd6-947c-66b56eff7587-utilities\") pod \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\" (UID: \"c4eac3a7-8d69-4cd6-947c-66b56eff7587\") " Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.060425 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4eac3a7-8d69-4cd6-947c-66b56eff7587-utilities" (OuterVolumeSpecName: "utilities") pod "c4eac3a7-8d69-4cd6-947c-66b56eff7587" (UID: "c4eac3a7-8d69-4cd6-947c-66b56eff7587"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.065427 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4eac3a7-8d69-4cd6-947c-66b56eff7587-kube-api-access-7jkpl" (OuterVolumeSpecName: "kube-api-access-7jkpl") pod "c4eac3a7-8d69-4cd6-947c-66b56eff7587" (UID: "c4eac3a7-8d69-4cd6-947c-66b56eff7587"). InnerVolumeSpecName "kube-api-access-7jkpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.160999 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4eac3a7-8d69-4cd6-947c-66b56eff7587-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.161044 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jkpl\" (UniqueName: \"kubernetes.io/projected/c4eac3a7-8d69-4cd6-947c-66b56eff7587-kube-api-access-7jkpl\") on node \"crc\" DevicePath \"\"" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.180236 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4eac3a7-8d69-4cd6-947c-66b56eff7587-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4eac3a7-8d69-4cd6-947c-66b56eff7587" (UID: "c4eac3a7-8d69-4cd6-947c-66b56eff7587"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.263199 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4eac3a7-8d69-4cd6-947c-66b56eff7587-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.406063 4772 generic.go:334] "Generic (PLEG): container finished" podID="c4eac3a7-8d69-4cd6-947c-66b56eff7587" containerID="3cef8e60e23fb87a784c117cd5b2873296af925a75836bf555316db0b79f31cc" exitCode=0 Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.406119 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jjvvx" event={"ID":"c4eac3a7-8d69-4cd6-947c-66b56eff7587","Type":"ContainerDied","Data":"3cef8e60e23fb87a784c117cd5b2873296af925a75836bf555316db0b79f31cc"} Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.406159 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jjvvx" event={"ID":"c4eac3a7-8d69-4cd6-947c-66b56eff7587","Type":"ContainerDied","Data":"ce60c257ad067d87babae82805e7505bd36050ca7afba70f8abf2f3aa5fb80b3"} Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.406157 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jjvvx" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.406181 4772 scope.go:117] "RemoveContainer" containerID="3cef8e60e23fb87a784c117cd5b2873296af925a75836bf555316db0b79f31cc" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.426046 4772 scope.go:117] "RemoveContainer" containerID="51583ac78bf545d18769d09508d1e6611d3229c3eaccc1f2f6b6adb3bb71a882" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.459540 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jjvvx"] Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.469455 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jjvvx"] Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.485295 4772 scope.go:117] "RemoveContainer" containerID="912c341c4a583f50c4e78ed5925687058bd5d75c4f71923ae343cbaa665de0da" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.509681 4772 scope.go:117] "RemoveContainer" containerID="3cef8e60e23fb87a784c117cd5b2873296af925a75836bf555316db0b79f31cc" Nov 28 11:55:26 crc kubenswrapper[4772]: E1128 11:55:26.510115 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cef8e60e23fb87a784c117cd5b2873296af925a75836bf555316db0b79f31cc\": container with ID starting with 3cef8e60e23fb87a784c117cd5b2873296af925a75836bf555316db0b79f31cc not found: ID does not exist" containerID="3cef8e60e23fb87a784c117cd5b2873296af925a75836bf555316db0b79f31cc" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.510144 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cef8e60e23fb87a784c117cd5b2873296af925a75836bf555316db0b79f31cc"} err="failed to get container status \"3cef8e60e23fb87a784c117cd5b2873296af925a75836bf555316db0b79f31cc\": rpc error: code = NotFound desc = could not find container \"3cef8e60e23fb87a784c117cd5b2873296af925a75836bf555316db0b79f31cc\": container with ID starting with 3cef8e60e23fb87a784c117cd5b2873296af925a75836bf555316db0b79f31cc not found: ID does not exist" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.510164 4772 scope.go:117] "RemoveContainer" containerID="51583ac78bf545d18769d09508d1e6611d3229c3eaccc1f2f6b6adb3bb71a882" Nov 28 11:55:26 crc kubenswrapper[4772]: E1128 11:55:26.510775 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51583ac78bf545d18769d09508d1e6611d3229c3eaccc1f2f6b6adb3bb71a882\": container with ID starting with 51583ac78bf545d18769d09508d1e6611d3229c3eaccc1f2f6b6adb3bb71a882 not found: ID does not exist" containerID="51583ac78bf545d18769d09508d1e6611d3229c3eaccc1f2f6b6adb3bb71a882" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.510820 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51583ac78bf545d18769d09508d1e6611d3229c3eaccc1f2f6b6adb3bb71a882"} err="failed to get container status \"51583ac78bf545d18769d09508d1e6611d3229c3eaccc1f2f6b6adb3bb71a882\": rpc error: code = NotFound desc = could not find container \"51583ac78bf545d18769d09508d1e6611d3229c3eaccc1f2f6b6adb3bb71a882\": container with ID starting with 51583ac78bf545d18769d09508d1e6611d3229c3eaccc1f2f6b6adb3bb71a882 not found: ID does not exist" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.510848 4772 scope.go:117] "RemoveContainer" 
containerID="912c341c4a583f50c4e78ed5925687058bd5d75c4f71923ae343cbaa665de0da" Nov 28 11:55:26 crc kubenswrapper[4772]: E1128 11:55:26.511254 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"912c341c4a583f50c4e78ed5925687058bd5d75c4f71923ae343cbaa665de0da\": container with ID starting with 912c341c4a583f50c4e78ed5925687058bd5d75c4f71923ae343cbaa665de0da not found: ID does not exist" containerID="912c341c4a583f50c4e78ed5925687058bd5d75c4f71923ae343cbaa665de0da" Nov 28 11:55:26 crc kubenswrapper[4772]: I1128 11:55:26.511275 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"912c341c4a583f50c4e78ed5925687058bd5d75c4f71923ae343cbaa665de0da"} err="failed to get container status \"912c341c4a583f50c4e78ed5925687058bd5d75c4f71923ae343cbaa665de0da\": rpc error: code = NotFound desc = could not find container \"912c341c4a583f50c4e78ed5925687058bd5d75c4f71923ae343cbaa665de0da\": container with ID starting with 912c341c4a583f50c4e78ed5925687058bd5d75c4f71923ae343cbaa665de0da not found: ID does not exist" Nov 28 11:55:28 crc kubenswrapper[4772]: I1128 11:55:28.013400 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4eac3a7-8d69-4cd6-947c-66b56eff7587" path="/var/lib/kubelet/pods/c4eac3a7-8d69-4cd6-947c-66b56eff7587/volumes" Nov 28 11:57:53 crc kubenswrapper[4772]: I1128 11:57:53.896602 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:57:53 crc kubenswrapper[4772]: I1128 11:57:53.897090 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:58:23 crc kubenswrapper[4772]: I1128 11:58:23.896539 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:58:23 crc kubenswrapper[4772]: I1128 11:58:23.896963 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:58:53 crc kubenswrapper[4772]: I1128 11:58:53.896911 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:58:53 crc kubenswrapper[4772]: I1128 11:58:53.897596 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:58:53 crc kubenswrapper[4772]: I1128 11:58:53.897663 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 11:58:53 crc kubenswrapper[4772]: I1128 11:58:53.898718 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 11:58:53 crc kubenswrapper[4772]: I1128 11:58:53.898805 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" gracePeriod=600 Nov 28 11:58:54 crc kubenswrapper[4772]: E1128 11:58:54.064624 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:58:54 crc kubenswrapper[4772]: I1128 11:58:54.793650 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" exitCode=0 Nov 28 11:58:54 crc kubenswrapper[4772]: I1128 11:58:54.793753 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1"} Nov 28 11:58:54 crc kubenswrapper[4772]: I1128 11:58:54.793793 4772 scope.go:117] "RemoveContainer" containerID="4f528da92aae59ffe428941fe10740aafb66385b3cf3a7e16a99055a6e05cabe" Nov 28 11:58:54 crc kubenswrapper[4772]: I1128 11:58:54.795868 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 11:58:54 crc kubenswrapper[4772]: E1128 11:58:54.796381 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:59:07 crc kubenswrapper[4772]: I1128 11:59:07.994737 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 11:59:07 crc kubenswrapper[4772]: E1128 11:59:07.996677 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:59:18 crc kubenswrapper[4772]: I1128 11:59:18.994969 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 11:59:18 crc kubenswrapper[4772]: E1128 11:59:18.995993 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:59:29 crc kubenswrapper[4772]: I1128 11:59:29.995321 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 11:59:29 crc kubenswrapper[4772]: E1128 11:59:29.996267 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:59:40 crc kubenswrapper[4772]: I1128 11:59:40.995241 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 11:59:40 crc kubenswrapper[4772]: E1128 11:59:40.996464 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 11:59:54 crc kubenswrapper[4772]: I1128 11:59:54.994070 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 11:59:54 crc kubenswrapper[4772]: E1128 11:59:54.995006 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.160641 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9"] Nov 28 12:00:00 crc kubenswrapper[4772]: E1128 12:00:00.161651 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4eac3a7-8d69-4cd6-947c-66b56eff7587" containerName="extract-content" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.161669 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4eac3a7-8d69-4cd6-947c-66b56eff7587" containerName="extract-content" Nov 28 12:00:00 crc kubenswrapper[4772]: E1128 
12:00:00.161685 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4eac3a7-8d69-4cd6-947c-66b56eff7587" containerName="extract-utilities" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.161693 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4eac3a7-8d69-4cd6-947c-66b56eff7587" containerName="extract-utilities" Nov 28 12:00:00 crc kubenswrapper[4772]: E1128 12:00:00.161711 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4eac3a7-8d69-4cd6-947c-66b56eff7587" containerName="registry-server" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.161719 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4eac3a7-8d69-4cd6-947c-66b56eff7587" containerName="registry-server" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.162019 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4eac3a7-8d69-4cd6-947c-66b56eff7587" containerName="registry-server" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.175327 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9"] Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.175468 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.177923 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.179156 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.327995 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed2e9ca5-2460-484d-85b1-56825d01acfd-secret-volume\") pod \"collect-profiles-29405520-cnhk9\" (UID: \"ed2e9ca5-2460-484d-85b1-56825d01acfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.328053 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w94l9\" (UniqueName: \"kubernetes.io/projected/ed2e9ca5-2460-484d-85b1-56825d01acfd-kube-api-access-w94l9\") pod \"collect-profiles-29405520-cnhk9\" (UID: \"ed2e9ca5-2460-484d-85b1-56825d01acfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.328131 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed2e9ca5-2460-484d-85b1-56825d01acfd-config-volume\") pod \"collect-profiles-29405520-cnhk9\" (UID: \"ed2e9ca5-2460-484d-85b1-56825d01acfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.429723 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed2e9ca5-2460-484d-85b1-56825d01acfd-config-volume\") pod \"collect-profiles-29405520-cnhk9\" (UID: \"ed2e9ca5-2460-484d-85b1-56825d01acfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" Nov 28 12:00:00 crc kubenswrapper[4772]: 
I1128 12:00:00.429943 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed2e9ca5-2460-484d-85b1-56825d01acfd-secret-volume\") pod \"collect-profiles-29405520-cnhk9\" (UID: \"ed2e9ca5-2460-484d-85b1-56825d01acfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.430018 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w94l9\" (UniqueName: \"kubernetes.io/projected/ed2e9ca5-2460-484d-85b1-56825d01acfd-kube-api-access-w94l9\") pod \"collect-profiles-29405520-cnhk9\" (UID: \"ed2e9ca5-2460-484d-85b1-56825d01acfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.431314 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed2e9ca5-2460-484d-85b1-56825d01acfd-config-volume\") pod \"collect-profiles-29405520-cnhk9\" (UID: \"ed2e9ca5-2460-484d-85b1-56825d01acfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.442880 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed2e9ca5-2460-484d-85b1-56825d01acfd-secret-volume\") pod \"collect-profiles-29405520-cnhk9\" (UID: \"ed2e9ca5-2460-484d-85b1-56825d01acfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.450599 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w94l9\" (UniqueName: \"kubernetes.io/projected/ed2e9ca5-2460-484d-85b1-56825d01acfd-kube-api-access-w94l9\") pod \"collect-profiles-29405520-cnhk9\" (UID: \"ed2e9ca5-2460-484d-85b1-56825d01acfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" Nov 28 12:00:00 crc kubenswrapper[4772]: I1128 12:00:00.515521 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" Nov 28 12:00:01 crc kubenswrapper[4772]: I1128 12:00:01.026703 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9"] Nov 28 12:00:01 crc kubenswrapper[4772]: I1128 12:00:01.515487 4772 generic.go:334] "Generic (PLEG): container finished" podID="ed2e9ca5-2460-484d-85b1-56825d01acfd" containerID="970a3c28cdbeceff9fd7d1cf08b1c7390a7d77a7e43b94105ed820c7ec9cb18a" exitCode=0 Nov 28 12:00:01 crc kubenswrapper[4772]: I1128 12:00:01.515577 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" event={"ID":"ed2e9ca5-2460-484d-85b1-56825d01acfd","Type":"ContainerDied","Data":"970a3c28cdbeceff9fd7d1cf08b1c7390a7d77a7e43b94105ed820c7ec9cb18a"} Nov 28 12:00:01 crc kubenswrapper[4772]: I1128 12:00:01.515767 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" event={"ID":"ed2e9ca5-2460-484d-85b1-56825d01acfd","Type":"ContainerStarted","Data":"82c16b75224139d8126fa67293336cadd5c14766c46fd9648e07ab77919b8b04"} Nov 28 12:00:02 crc kubenswrapper[4772]: I1128 12:00:02.885519 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" Nov 28 12:00:02 crc kubenswrapper[4772]: I1128 12:00:02.981958 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94l9\" (UniqueName: \"kubernetes.io/projected/ed2e9ca5-2460-484d-85b1-56825d01acfd-kube-api-access-w94l9\") pod \"ed2e9ca5-2460-484d-85b1-56825d01acfd\" (UID: \"ed2e9ca5-2460-484d-85b1-56825d01acfd\") " Nov 28 12:00:02 crc kubenswrapper[4772]: I1128 12:00:02.982022 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed2e9ca5-2460-484d-85b1-56825d01acfd-secret-volume\") pod \"ed2e9ca5-2460-484d-85b1-56825d01acfd\" (UID: \"ed2e9ca5-2460-484d-85b1-56825d01acfd\") " Nov 28 12:00:02 crc kubenswrapper[4772]: I1128 12:00:02.982245 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed2e9ca5-2460-484d-85b1-56825d01acfd-config-volume\") pod \"ed2e9ca5-2460-484d-85b1-56825d01acfd\" (UID: \"ed2e9ca5-2460-484d-85b1-56825d01acfd\") " Nov 28 12:00:02 crc kubenswrapper[4772]: I1128 12:00:02.982854 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed2e9ca5-2460-484d-85b1-56825d01acfd-config-volume" (OuterVolumeSpecName: "config-volume") pod "ed2e9ca5-2460-484d-85b1-56825d01acfd" (UID: "ed2e9ca5-2460-484d-85b1-56825d01acfd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:00:02 crc kubenswrapper[4772]: I1128 12:00:02.988054 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed2e9ca5-2460-484d-85b1-56825d01acfd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ed2e9ca5-2460-484d-85b1-56825d01acfd" (UID: "ed2e9ca5-2460-484d-85b1-56825d01acfd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:00:02 crc kubenswrapper[4772]: I1128 12:00:02.988174 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed2e9ca5-2460-484d-85b1-56825d01acfd-kube-api-access-w94l9" (OuterVolumeSpecName: "kube-api-access-w94l9") pod "ed2e9ca5-2460-484d-85b1-56825d01acfd" (UID: "ed2e9ca5-2460-484d-85b1-56825d01acfd"). InnerVolumeSpecName "kube-api-access-w94l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:00:03 crc kubenswrapper[4772]: I1128 12:00:03.084646 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed2e9ca5-2460-484d-85b1-56825d01acfd-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 12:00:03 crc kubenswrapper[4772]: I1128 12:00:03.084683 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w94l9\" (UniqueName: \"kubernetes.io/projected/ed2e9ca5-2460-484d-85b1-56825d01acfd-kube-api-access-w94l9\") on node \"crc\" DevicePath \"\"" Nov 28 12:00:03 crc kubenswrapper[4772]: I1128 12:00:03.084695 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed2e9ca5-2460-484d-85b1-56825d01acfd-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 12:00:03 crc kubenswrapper[4772]: I1128 12:00:03.538952 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" event={"ID":"ed2e9ca5-2460-484d-85b1-56825d01acfd","Type":"ContainerDied","Data":"82c16b75224139d8126fa67293336cadd5c14766c46fd9648e07ab77919b8b04"} Nov 28 12:00:03 crc kubenswrapper[4772]: I1128 12:00:03.539002 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82c16b75224139d8126fa67293336cadd5c14766c46fd9648e07ab77919b8b04" Nov 28 12:00:03 crc kubenswrapper[4772]: I1128 12:00:03.539007 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-cnhk9" Nov 28 12:00:03 crc kubenswrapper[4772]: I1128 12:00:03.978580 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf"] Nov 28 12:00:03 crc kubenswrapper[4772]: I1128 12:00:03.991767 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405475-4lxmf"] Nov 28 12:00:04 crc kubenswrapper[4772]: I1128 12:00:04.008229 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c" path="/var/lib/kubelet/pods/b2cc00ea-5e83-427b-a9f6-8cd30e1b3c1c/volumes" Nov 28 12:00:09 crc kubenswrapper[4772]: I1128 12:00:09.995257 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:00:09 crc kubenswrapper[4772]: E1128 12:00:09.995936 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:00:22 crc kubenswrapper[4772]: I1128 12:00:22.995249 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:00:22 crc kubenswrapper[4772]: E1128 12:00:22.996298 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:00:34 crc kubenswrapper[4772]: I1128 12:00:34.995630 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:00:34 crc kubenswrapper[4772]: E1128 12:00:34.996587 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:00:45 crc kubenswrapper[4772]: I1128 12:00:45.994904 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:00:45 crc kubenswrapper[4772]: E1128 12:00:45.995649 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.113056 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kzbqr"] Nov 28 12:00:47 crc kubenswrapper[4772]: E1128 12:00:47.114558 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed2e9ca5-2460-484d-85b1-56825d01acfd" containerName="collect-profiles" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.114583 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2e9ca5-2460-484d-85b1-56825d01acfd" containerName="collect-profiles" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.114798 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed2e9ca5-2460-484d-85b1-56825d01acfd" containerName="collect-profiles" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.116194 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.128668 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzbqr"] Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.189678 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7l8b\" (UniqueName: \"kubernetes.io/projected/c3a68119-6eea-4860-be17-249edae83008-kube-api-access-c7l8b\") pod \"redhat-marketplace-kzbqr\" (UID: \"c3a68119-6eea-4860-be17-249edae83008\") " pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.189752 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a68119-6eea-4860-be17-249edae83008-catalog-content\") pod \"redhat-marketplace-kzbqr\" (UID: \"c3a68119-6eea-4860-be17-249edae83008\") " pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.190108 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a68119-6eea-4860-be17-249edae83008-utilities\") pod \"redhat-marketplace-kzbqr\" (UID: \"c3a68119-6eea-4860-be17-249edae83008\") " pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.291930 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7l8b\" (UniqueName: \"kubernetes.io/projected/c3a68119-6eea-4860-be17-249edae83008-kube-api-access-c7l8b\") pod \"redhat-marketplace-kzbqr\" (UID: \"c3a68119-6eea-4860-be17-249edae83008\") " pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.292069 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a68119-6eea-4860-be17-249edae83008-catalog-content\") pod \"redhat-marketplace-kzbqr\" (UID: \"c3a68119-6eea-4860-be17-249edae83008\") " pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.292214 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a68119-6eea-4860-be17-249edae83008-utilities\") pod \"redhat-marketplace-kzbqr\" (UID: \"c3a68119-6eea-4860-be17-249edae83008\") " pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.292773 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a68119-6eea-4860-be17-249edae83008-utilities\") pod \"redhat-marketplace-kzbqr\" (UID: \"c3a68119-6eea-4860-be17-249edae83008\") " pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.293027 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a68119-6eea-4860-be17-249edae83008-catalog-content\") pod \"redhat-marketplace-kzbqr\" (UID: \"c3a68119-6eea-4860-be17-249edae83008\") " pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.313542 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c7l8b\" (UniqueName: \"kubernetes.io/projected/c3a68119-6eea-4860-be17-249edae83008-kube-api-access-c7l8b\") pod \"redhat-marketplace-kzbqr\" (UID: \"c3a68119-6eea-4860-be17-249edae83008\") " pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.454459 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:47 crc kubenswrapper[4772]: I1128 12:00:47.468496 4772 scope.go:117] "RemoveContainer" containerID="d2762c6278fe8cc95f7fd3e48327ab0d03aed28e82b53cfeea687f8d1ba35b62" Nov 28 12:00:48 crc kubenswrapper[4772]: I1128 12:00:48.012871 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzbqr"] Nov 28 12:00:48 crc kubenswrapper[4772]: I1128 12:00:48.092719 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzbqr" event={"ID":"c3a68119-6eea-4860-be17-249edae83008","Type":"ContainerStarted","Data":"3a236c25aadff0dc01f474d1e319a493a08bdee394e5d2ab63891064dc351791"} Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.108051 4772 generic.go:334] "Generic (PLEG): container finished" podID="c3a68119-6eea-4860-be17-249edae83008" containerID="cc64f95a857e81208daa68dde32c9283b63808873c4a2ef48cbd8d5f48fd0148" exitCode=0 Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.108117 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzbqr" event={"ID":"c3a68119-6eea-4860-be17-249edae83008","Type":"ContainerDied","Data":"cc64f95a857e81208daa68dde32c9283b63808873c4a2ef48cbd8d5f48fd0148"} Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.112283 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.516244 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hchzj"] Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.523314 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hchzj" Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.553637 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3079664b-2930-4138-91ee-0bb3b5d6c1fd-utilities\") pod \"community-operators-hchzj\" (UID: \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\") " pod="openshift-marketplace/community-operators-hchzj" Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.553744 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhkhg\" (UniqueName: \"kubernetes.io/projected/3079664b-2930-4138-91ee-0bb3b5d6c1fd-kube-api-access-xhkhg\") pod \"community-operators-hchzj\" (UID: \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\") " pod="openshift-marketplace/community-operators-hchzj" Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.553975 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3079664b-2930-4138-91ee-0bb3b5d6c1fd-catalog-content\") pod \"community-operators-hchzj\" (UID: \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\") " pod="openshift-marketplace/community-operators-hchzj" Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.557998 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hchzj"] Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.656001 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3079664b-2930-4138-91ee-0bb3b5d6c1fd-utilities\") pod \"community-operators-hchzj\" (UID: \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\") " pod="openshift-marketplace/community-operators-hchzj" Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.656083 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhkhg\" (UniqueName: \"kubernetes.io/projected/3079664b-2930-4138-91ee-0bb3b5d6c1fd-kube-api-access-xhkhg\") pod \"community-operators-hchzj\" (UID: \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\") " pod="openshift-marketplace/community-operators-hchzj" Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.656244 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3079664b-2930-4138-91ee-0bb3b5d6c1fd-catalog-content\") pod \"community-operators-hchzj\" (UID: \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\") " pod="openshift-marketplace/community-operators-hchzj" Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.656568 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3079664b-2930-4138-91ee-0bb3b5d6c1fd-utilities\") pod \"community-operators-hchzj\" (UID: \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\") " pod="openshift-marketplace/community-operators-hchzj" Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.656906 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3079664b-2930-4138-91ee-0bb3b5d6c1fd-catalog-content\") pod \"community-operators-hchzj\" (UID: \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\") " pod="openshift-marketplace/community-operators-hchzj" Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.700808 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xhkhg\" (UniqueName: \"kubernetes.io/projected/3079664b-2930-4138-91ee-0bb3b5d6c1fd-kube-api-access-xhkhg\") pod \"community-operators-hchzj\" (UID: \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\") " pod="openshift-marketplace/community-operators-hchzj" Nov 28 12:00:49 crc kubenswrapper[4772]: I1128 12:00:49.869499 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hchzj" Nov 28 12:00:50 crc kubenswrapper[4772]: W1128 12:00:50.242602 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3079664b_2930_4138_91ee_0bb3b5d6c1fd.slice/crio-a9c54b56f189ae6310f13cef7deedba0fec72551956d7d441aecac5f93f6f2b6 WatchSource:0}: Error finding container a9c54b56f189ae6310f13cef7deedba0fec72551956d7d441aecac5f93f6f2b6: Status 404 returned error can't find the container with id a9c54b56f189ae6310f13cef7deedba0fec72551956d7d441aecac5f93f6f2b6 Nov 28 12:00:50 crc kubenswrapper[4772]: I1128 12:00:50.249907 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hchzj"] Nov 28 12:00:51 crc kubenswrapper[4772]: I1128 12:00:51.134265 4772 generic.go:334] "Generic (PLEG): container finished" podID="c3a68119-6eea-4860-be17-249edae83008" containerID="58b358e7c25f3a32045c3a2cef89689f5d566fa0cb35c998071f41508786f248" exitCode=0 Nov 28 12:00:51 crc kubenswrapper[4772]: I1128 12:00:51.134613 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzbqr" event={"ID":"c3a68119-6eea-4860-be17-249edae83008","Type":"ContainerDied","Data":"58b358e7c25f3a32045c3a2cef89689f5d566fa0cb35c998071f41508786f248"} Nov 28 12:00:51 crc kubenswrapper[4772]: I1128 12:00:51.138995 4772 generic.go:334] "Generic (PLEG): container finished" podID="3079664b-2930-4138-91ee-0bb3b5d6c1fd" containerID="d79204913f61cb7559b40ae1a227f93072da548be41b201cdb99c3b7e98f478a" exitCode=0 Nov 28 12:00:51 crc kubenswrapper[4772]: I1128 12:00:51.139048 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hchzj" event={"ID":"3079664b-2930-4138-91ee-0bb3b5d6c1fd","Type":"ContainerDied","Data":"d79204913f61cb7559b40ae1a227f93072da548be41b201cdb99c3b7e98f478a"} Nov 28 12:00:51 crc kubenswrapper[4772]: I1128 12:00:51.139086 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hchzj" event={"ID":"3079664b-2930-4138-91ee-0bb3b5d6c1fd","Type":"ContainerStarted","Data":"a9c54b56f189ae6310f13cef7deedba0fec72551956d7d441aecac5f93f6f2b6"} Nov 28 12:00:52 crc kubenswrapper[4772]: I1128 12:00:52.150178 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzbqr" event={"ID":"c3a68119-6eea-4860-be17-249edae83008","Type":"ContainerStarted","Data":"7933d26a14fe253e047fe1fdebf559d4840c8294c75fd7e9b4f7835b1cebdf9c"} Nov 28 12:00:52 crc kubenswrapper[4772]: I1128 12:00:52.172851 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kzbqr" podStartSLOduration=2.502076868 podStartE2EDuration="5.172833241s" podCreationTimestamp="2025-11-28 12:00:47 +0000 UTC" firstStartedPulling="2025-11-28 12:00:49.111747099 +0000 UTC m=+3247.434990366" lastFinishedPulling="2025-11-28 12:00:51.782503492 +0000 UTC m=+3250.105746739" observedRunningTime="2025-11-28 12:00:52.165334829 +0000 UTC 
m=+3250.488578056" watchObservedRunningTime="2025-11-28 12:00:52.172833241 +0000 UTC m=+3250.496076468" Nov 28 12:00:53 crc kubenswrapper[4772]: I1128 12:00:53.160533 4772 generic.go:334] "Generic (PLEG): container finished" podID="3079664b-2930-4138-91ee-0bb3b5d6c1fd" containerID="153877ae7a37c89f495533f9205bb5e7b41f65a592f30c1082d1e2e49ed30330" exitCode=0 Nov 28 12:00:53 crc kubenswrapper[4772]: I1128 12:00:53.160642 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hchzj" event={"ID":"3079664b-2930-4138-91ee-0bb3b5d6c1fd","Type":"ContainerDied","Data":"153877ae7a37c89f495533f9205bb5e7b41f65a592f30c1082d1e2e49ed30330"} Nov 28 12:00:54 crc kubenswrapper[4772]: I1128 12:00:54.176601 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hchzj" event={"ID":"3079664b-2930-4138-91ee-0bb3b5d6c1fd","Type":"ContainerStarted","Data":"e30c2b179085cec38613507227fd72896c9acc350c1a29560b4cc122a7194523"} Nov 28 12:00:54 crc kubenswrapper[4772]: I1128 12:00:54.199062 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hchzj" podStartSLOduration=2.570995894 podStartE2EDuration="5.199043939s" podCreationTimestamp="2025-11-28 12:00:49 +0000 UTC" firstStartedPulling="2025-11-28 12:00:51.140507965 +0000 UTC m=+3249.463751222" lastFinishedPulling="2025-11-28 12:00:53.76855604 +0000 UTC m=+3252.091799267" observedRunningTime="2025-11-28 12:00:54.194963719 +0000 UTC m=+3252.518206956" watchObservedRunningTime="2025-11-28 12:00:54.199043939 +0000 UTC m=+3252.522287166" Nov 28 12:00:57 crc kubenswrapper[4772]: I1128 12:00:57.455311 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:57 crc kubenswrapper[4772]: I1128 12:00:57.457493 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:57 crc kubenswrapper[4772]: I1128 12:00:57.536420 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:58 crc kubenswrapper[4772]: I1128 12:00:58.293709 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kzbqr" Nov 28 12:00:58 crc kubenswrapper[4772]: I1128 12:00:58.359715 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzbqr"] Nov 28 12:00:59 crc kubenswrapper[4772]: I1128 12:00:59.870175 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hchzj" Nov 28 12:00:59 crc kubenswrapper[4772]: I1128 12:00:59.870622 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hchzj" Nov 28 12:00:59 crc kubenswrapper[4772]: I1128 12:00:59.989400 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hchzj" Nov 28 12:00:59 crc kubenswrapper[4772]: I1128 12:00:59.994908 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:00:59 crc kubenswrapper[4772]: E1128 12:00:59.995154 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.154985 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29405521-g4n7r"] Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.156513 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405521-g4n7r" Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.163744 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-config-data\") pod \"keystone-cron-29405521-g4n7r\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " pod="openstack/keystone-cron-29405521-g4n7r" Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.163813 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-combined-ca-bundle\") pod \"keystone-cron-29405521-g4n7r\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " pod="openstack/keystone-cron-29405521-g4n7r" Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.163861 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-fernet-keys\") pod \"keystone-cron-29405521-g4n7r\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " pod="openstack/keystone-cron-29405521-g4n7r" Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.163933 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ttsl\" (UniqueName: \"kubernetes.io/projected/058bf0b5-6899-4a5d-a098-e40b52cfd512-kube-api-access-2ttsl\") pod \"keystone-cron-29405521-g4n7r\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " pod="openstack/keystone-cron-29405521-g4n7r" Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.217783 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29405521-g4n7r"] Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.236216 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kzbqr" podUID="c3a68119-6eea-4860-be17-249edae83008" containerName="registry-server" containerID="cri-o://7933d26a14fe253e047fe1fdebf559d4840c8294c75fd7e9b4f7835b1cebdf9c" gracePeriod=2 Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.268266 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ttsl\" (UniqueName: \"kubernetes.io/projected/058bf0b5-6899-4a5d-a098-e40b52cfd512-kube-api-access-2ttsl\") pod \"keystone-cron-29405521-g4n7r\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " pod="openstack/keystone-cron-29405521-g4n7r" Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.268434 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-config-data\") pod \"keystone-cron-29405521-g4n7r\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " pod="openstack/keystone-cron-29405521-g4n7r" 
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.268504 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-combined-ca-bundle\") pod \"keystone-cron-29405521-g4n7r\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " pod="openstack/keystone-cron-29405521-g4n7r"
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.268553 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-fernet-keys\") pod \"keystone-cron-29405521-g4n7r\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " pod="openstack/keystone-cron-29405521-g4n7r"
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.278247 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-combined-ca-bundle\") pod \"keystone-cron-29405521-g4n7r\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " pod="openstack/keystone-cron-29405521-g4n7r"
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.280052 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-config-data\") pod \"keystone-cron-29405521-g4n7r\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " pod="openstack/keystone-cron-29405521-g4n7r"
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.284832 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-fernet-keys\") pod \"keystone-cron-29405521-g4n7r\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " pod="openstack/keystone-cron-29405521-g4n7r"
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.292290 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ttsl\" (UniqueName: \"kubernetes.io/projected/058bf0b5-6899-4a5d-a098-e40b52cfd512-kube-api-access-2ttsl\") pod \"keystone-cron-29405521-g4n7r\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " pod="openstack/keystone-cron-29405521-g4n7r"
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.306064 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hchzj"
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.522980 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405521-g4n7r"
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.700397 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hchzj"]
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.701118 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzbqr"
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.878064 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7l8b\" (UniqueName: \"kubernetes.io/projected/c3a68119-6eea-4860-be17-249edae83008-kube-api-access-c7l8b\") pod \"c3a68119-6eea-4860-be17-249edae83008\" (UID: \"c3a68119-6eea-4860-be17-249edae83008\") "
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.878119 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a68119-6eea-4860-be17-249edae83008-catalog-content\") pod \"c3a68119-6eea-4860-be17-249edae83008\" (UID: \"c3a68119-6eea-4860-be17-249edae83008\") "
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.878182 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a68119-6eea-4860-be17-249edae83008-utilities\") pod \"c3a68119-6eea-4860-be17-249edae83008\" (UID: \"c3a68119-6eea-4860-be17-249edae83008\") "
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.879233 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a68119-6eea-4860-be17-249edae83008-utilities" (OuterVolumeSpecName: "utilities") pod "c3a68119-6eea-4860-be17-249edae83008" (UID: "c3a68119-6eea-4860-be17-249edae83008"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.884470 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a68119-6eea-4860-be17-249edae83008-kube-api-access-c7l8b" (OuterVolumeSpecName: "kube-api-access-c7l8b") pod "c3a68119-6eea-4860-be17-249edae83008" (UID: "c3a68119-6eea-4860-be17-249edae83008"). InnerVolumeSpecName "kube-api-access-c7l8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.896456 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a68119-6eea-4860-be17-249edae83008-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3a68119-6eea-4860-be17-249edae83008" (UID: "c3a68119-6eea-4860-be17-249edae83008"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.980823 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7l8b\" (UniqueName: \"kubernetes.io/projected/c3a68119-6eea-4860-be17-249edae83008-kube-api-access-c7l8b\") on node \"crc\" DevicePath \"\""
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.981815 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a68119-6eea-4860-be17-249edae83008-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 12:01:00 crc kubenswrapper[4772]: I1128 12:01:00.981832 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a68119-6eea-4860-be17-249edae83008-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.030691 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29405521-g4n7r"]
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.248414 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405521-g4n7r" event={"ID":"058bf0b5-6899-4a5d-a098-e40b52cfd512","Type":"ContainerStarted","Data":"079f74f4fff8affa54c1cdb465b75f838072c17412c51c79a5bda748ff2e90a8"}
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.252630 4772 generic.go:334] "Generic (PLEG): container finished" podID="c3a68119-6eea-4860-be17-249edae83008" containerID="7933d26a14fe253e047fe1fdebf559d4840c8294c75fd7e9b4f7835b1cebdf9c" exitCode=0
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.253454 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzbqr"
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.256550 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzbqr" event={"ID":"c3a68119-6eea-4860-be17-249edae83008","Type":"ContainerDied","Data":"7933d26a14fe253e047fe1fdebf559d4840c8294c75fd7e9b4f7835b1cebdf9c"}
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.256613 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzbqr" event={"ID":"c3a68119-6eea-4860-be17-249edae83008","Type":"ContainerDied","Data":"3a236c25aadff0dc01f474d1e319a493a08bdee394e5d2ab63891064dc351791"}
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.256642 4772 scope.go:117] "RemoveContainer" containerID="7933d26a14fe253e047fe1fdebf559d4840c8294c75fd7e9b4f7835b1cebdf9c"
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.293287 4772 scope.go:117] "RemoveContainer" containerID="58b358e7c25f3a32045c3a2cef89689f5d566fa0cb35c998071f41508786f248"
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.326964 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzbqr"]
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.328314 4772 scope.go:117] "RemoveContainer" containerID="cc64f95a857e81208daa68dde32c9283b63808873c4a2ef48cbd8d5f48fd0148"
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.334062 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzbqr"]
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.379531 4772 scope.go:117] "RemoveContainer" containerID="7933d26a14fe253e047fe1fdebf559d4840c8294c75fd7e9b4f7835b1cebdf9c"
Nov 28 12:01:01 crc kubenswrapper[4772]: E1128 12:01:01.380094 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7933d26a14fe253e047fe1fdebf559d4840c8294c75fd7e9b4f7835b1cebdf9c\": container with ID starting with 7933d26a14fe253e047fe1fdebf559d4840c8294c75fd7e9b4f7835b1cebdf9c not found: ID does not exist" containerID="7933d26a14fe253e047fe1fdebf559d4840c8294c75fd7e9b4f7835b1cebdf9c"
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.380127 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7933d26a14fe253e047fe1fdebf559d4840c8294c75fd7e9b4f7835b1cebdf9c"} err="failed to get container status \"7933d26a14fe253e047fe1fdebf559d4840c8294c75fd7e9b4f7835b1cebdf9c\": rpc error: code = NotFound desc = could not find container \"7933d26a14fe253e047fe1fdebf559d4840c8294c75fd7e9b4f7835b1cebdf9c\": container with ID starting with 7933d26a14fe253e047fe1fdebf559d4840c8294c75fd7e9b4f7835b1cebdf9c not found: ID does not exist"
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.380165 4772 scope.go:117] "RemoveContainer" containerID="58b358e7c25f3a32045c3a2cef89689f5d566fa0cb35c998071f41508786f248"
Nov 28 12:01:01 crc kubenswrapper[4772]: E1128 12:01:01.380586 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58b358e7c25f3a32045c3a2cef89689f5d566fa0cb35c998071f41508786f248\": container with ID starting with 58b358e7c25f3a32045c3a2cef89689f5d566fa0cb35c998071f41508786f248 not found: ID does not exist" containerID="58b358e7c25f3a32045c3a2cef89689f5d566fa0cb35c998071f41508786f248"
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.380615 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58b358e7c25f3a32045c3a2cef89689f5d566fa0cb35c998071f41508786f248"} err="failed to get container status \"58b358e7c25f3a32045c3a2cef89689f5d566fa0cb35c998071f41508786f248\": rpc error: code = NotFound desc = could not find container \"58b358e7c25f3a32045c3a2cef89689f5d566fa0cb35c998071f41508786f248\": container with ID starting with 58b358e7c25f3a32045c3a2cef89689f5d566fa0cb35c998071f41508786f248 not found: ID does not exist"
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.380629 4772 scope.go:117] "RemoveContainer" containerID="cc64f95a857e81208daa68dde32c9283b63808873c4a2ef48cbd8d5f48fd0148"
Nov 28 12:01:01 crc kubenswrapper[4772]: E1128 12:01:01.381004 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc64f95a857e81208daa68dde32c9283b63808873c4a2ef48cbd8d5f48fd0148\": container with ID starting with cc64f95a857e81208daa68dde32c9283b63808873c4a2ef48cbd8d5f48fd0148 not found: ID does not exist" containerID="cc64f95a857e81208daa68dde32c9283b63808873c4a2ef48cbd8d5f48fd0148"
Nov 28 12:01:01 crc kubenswrapper[4772]: I1128 12:01:01.381040 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc64f95a857e81208daa68dde32c9283b63808873c4a2ef48cbd8d5f48fd0148"} err="failed to get container status \"cc64f95a857e81208daa68dde32c9283b63808873c4a2ef48cbd8d5f48fd0148\": rpc error: code = NotFound desc = could not find container \"cc64f95a857e81208daa68dde32c9283b63808873c4a2ef48cbd8d5f48fd0148\": container with ID starting with cc64f95a857e81208daa68dde32c9283b63808873c4a2ef48cbd8d5f48fd0148 not found: ID does not exist"
Nov 28 12:01:02 crc kubenswrapper[4772]: I1128 12:01:02.021428 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a68119-6eea-4860-be17-249edae83008" path="/var/lib/kubelet/pods/c3a68119-6eea-4860-be17-249edae83008/volumes"
Nov 28 12:01:02 crc kubenswrapper[4772]: I1128 12:01:02.263460 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405521-g4n7r" event={"ID":"058bf0b5-6899-4a5d-a098-e40b52cfd512","Type":"ContainerStarted","Data":"10eeca33a8d90723c359b71b3cbfe248270fa93d4d409ff421c12bfb97eb7f25"}
Nov 28 12:01:02 crc kubenswrapper[4772]: I1128 12:01:02.266155 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hchzj" podUID="3079664b-2930-4138-91ee-0bb3b5d6c1fd" containerName="registry-server" containerID="cri-o://e30c2b179085cec38613507227fd72896c9acc350c1a29560b4cc122a7194523" gracePeriod=2
Nov 28 12:01:02 crc kubenswrapper[4772]: I1128 12:01:02.302334 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29405521-g4n7r" podStartSLOduration=2.302310499 podStartE2EDuration="2.302310499s" podCreationTimestamp="2025-11-28 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:01:02.279246949 +0000 UTC m=+3260.602490186" watchObservedRunningTime="2025-11-28 12:01:02.302310499 +0000 UTC m=+3260.625553736"
Nov 28 12:01:02 crc kubenswrapper[4772]: I1128 12:01:02.765423 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hchzj"
Nov 28 12:01:02 crc kubenswrapper[4772]: I1128 12:01:02.915758 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3079664b-2930-4138-91ee-0bb3b5d6c1fd-catalog-content\") pod \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\" (UID: \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\") "
Nov 28 12:01:02 crc kubenswrapper[4772]: I1128 12:01:02.915933 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3079664b-2930-4138-91ee-0bb3b5d6c1fd-utilities\") pod \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\" (UID: \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\") "
Nov 28 12:01:02 crc kubenswrapper[4772]: I1128 12:01:02.916286 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhkhg\" (UniqueName: \"kubernetes.io/projected/3079664b-2930-4138-91ee-0bb3b5d6c1fd-kube-api-access-xhkhg\") pod \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\" (UID: \"3079664b-2930-4138-91ee-0bb3b5d6c1fd\") "
Nov 28 12:01:02 crc kubenswrapper[4772]: I1128 12:01:02.917192 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3079664b-2930-4138-91ee-0bb3b5d6c1fd-utilities" (OuterVolumeSpecName: "utilities") pod "3079664b-2930-4138-91ee-0bb3b5d6c1fd" (UID: "3079664b-2930-4138-91ee-0bb3b5d6c1fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:01:02 crc kubenswrapper[4772]: I1128 12:01:02.929601 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3079664b-2930-4138-91ee-0bb3b5d6c1fd-kube-api-access-xhkhg" (OuterVolumeSpecName: "kube-api-access-xhkhg") pod "3079664b-2930-4138-91ee-0bb3b5d6c1fd" (UID: "3079664b-2930-4138-91ee-0bb3b5d6c1fd"). InnerVolumeSpecName "kube-api-access-xhkhg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:01:02 crc kubenswrapper[4772]: I1128 12:01:02.967127 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3079664b-2930-4138-91ee-0bb3b5d6c1fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3079664b-2930-4138-91ee-0bb3b5d6c1fd" (UID: "3079664b-2930-4138-91ee-0bb3b5d6c1fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.018158 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhkhg\" (UniqueName: \"kubernetes.io/projected/3079664b-2930-4138-91ee-0bb3b5d6c1fd-kube-api-access-xhkhg\") on node \"crc\" DevicePath \"\""
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.018186 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3079664b-2930-4138-91ee-0bb3b5d6c1fd-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.018199 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3079664b-2930-4138-91ee-0bb3b5d6c1fd-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.280326 4772 generic.go:334] "Generic (PLEG): container finished" podID="3079664b-2930-4138-91ee-0bb3b5d6c1fd" containerID="e30c2b179085cec38613507227fd72896c9acc350c1a29560b4cc122a7194523" exitCode=0
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.280441 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hchzj" event={"ID":"3079664b-2930-4138-91ee-0bb3b5d6c1fd","Type":"ContainerDied","Data":"e30c2b179085cec38613507227fd72896c9acc350c1a29560b4cc122a7194523"}
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.280525 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hchzj" event={"ID":"3079664b-2930-4138-91ee-0bb3b5d6c1fd","Type":"ContainerDied","Data":"a9c54b56f189ae6310f13cef7deedba0fec72551956d7d441aecac5f93f6f2b6"}
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.280395 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hchzj"
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.280561 4772 scope.go:117] "RemoveContainer" containerID="e30c2b179085cec38613507227fd72896c9acc350c1a29560b4cc122a7194523"
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.282985 4772 generic.go:334] "Generic (PLEG): container finished" podID="058bf0b5-6899-4a5d-a098-e40b52cfd512" containerID="10eeca33a8d90723c359b71b3cbfe248270fa93d4d409ff421c12bfb97eb7f25" exitCode=0
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.283038 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405521-g4n7r" event={"ID":"058bf0b5-6899-4a5d-a098-e40b52cfd512","Type":"ContainerDied","Data":"10eeca33a8d90723c359b71b3cbfe248270fa93d4d409ff421c12bfb97eb7f25"}
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.306829 4772 scope.go:117] "RemoveContainer" containerID="153877ae7a37c89f495533f9205bb5e7b41f65a592f30c1082d1e2e49ed30330"
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.342330 4772 scope.go:117] "RemoveContainer" containerID="d79204913f61cb7559b40ae1a227f93072da548be41b201cdb99c3b7e98f478a"
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.345008 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hchzj"]
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.357047 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hchzj"]
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.377517 4772 scope.go:117] "RemoveContainer" containerID="e30c2b179085cec38613507227fd72896c9acc350c1a29560b4cc122a7194523"
Nov 28 12:01:03 crc kubenswrapper[4772]: E1128 12:01:03.378023 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e30c2b179085cec38613507227fd72896c9acc350c1a29560b4cc122a7194523\": container with ID starting with e30c2b179085cec38613507227fd72896c9acc350c1a29560b4cc122a7194523 not found: ID does not exist" containerID="e30c2b179085cec38613507227fd72896c9acc350c1a29560b4cc122a7194523"
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.378074 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e30c2b179085cec38613507227fd72896c9acc350c1a29560b4cc122a7194523"} err="failed to get container status \"e30c2b179085cec38613507227fd72896c9acc350c1a29560b4cc122a7194523\": rpc error: code = NotFound desc = could not find container \"e30c2b179085cec38613507227fd72896c9acc350c1a29560b4cc122a7194523\": container with ID starting with e30c2b179085cec38613507227fd72896c9acc350c1a29560b4cc122a7194523 not found: ID does not exist"
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.378100 4772 scope.go:117] "RemoveContainer" containerID="153877ae7a37c89f495533f9205bb5e7b41f65a592f30c1082d1e2e49ed30330"
Nov 28 12:01:03 crc kubenswrapper[4772]: E1128 12:01:03.378526 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"153877ae7a37c89f495533f9205bb5e7b41f65a592f30c1082d1e2e49ed30330\": container with ID starting with 153877ae7a37c89f495533f9205bb5e7b41f65a592f30c1082d1e2e49ed30330 not found: ID does not exist" containerID="153877ae7a37c89f495533f9205bb5e7b41f65a592f30c1082d1e2e49ed30330"
Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.378552 4772 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"153877ae7a37c89f495533f9205bb5e7b41f65a592f30c1082d1e2e49ed30330"} err="failed to get container status \"153877ae7a37c89f495533f9205bb5e7b41f65a592f30c1082d1e2e49ed30330\": rpc error: code = NotFound desc = could not find container \"153877ae7a37c89f495533f9205bb5e7b41f65a592f30c1082d1e2e49ed30330\": container with ID starting with 153877ae7a37c89f495533f9205bb5e7b41f65a592f30c1082d1e2e49ed30330 not found: ID does not exist" Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.378568 4772 scope.go:117] "RemoveContainer" containerID="d79204913f61cb7559b40ae1a227f93072da548be41b201cdb99c3b7e98f478a" Nov 28 12:01:03 crc kubenswrapper[4772]: E1128 12:01:03.378788 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79204913f61cb7559b40ae1a227f93072da548be41b201cdb99c3b7e98f478a\": container with ID starting with d79204913f61cb7559b40ae1a227f93072da548be41b201cdb99c3b7e98f478a not found: ID does not exist" containerID="d79204913f61cb7559b40ae1a227f93072da548be41b201cdb99c3b7e98f478a" Nov 28 12:01:03 crc kubenswrapper[4772]: I1128 12:01:03.378809 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79204913f61cb7559b40ae1a227f93072da548be41b201cdb99c3b7e98f478a"} err="failed to get container status \"d79204913f61cb7559b40ae1a227f93072da548be41b201cdb99c3b7e98f478a\": rpc error: code = NotFound desc = could not find container \"d79204913f61cb7559b40ae1a227f93072da548be41b201cdb99c3b7e98f478a\": container with ID starting with d79204913f61cb7559b40ae1a227f93072da548be41b201cdb99c3b7e98f478a not found: ID does not exist" Nov 28 12:01:04 crc kubenswrapper[4772]: I1128 12:01:04.015108 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3079664b-2930-4138-91ee-0bb3b5d6c1fd" path="/var/lib/kubelet/pods/3079664b-2930-4138-91ee-0bb3b5d6c1fd/volumes" Nov 28 12:01:04 crc kubenswrapper[4772]: I1128 12:01:04.710005 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29405521-g4n7r" Nov 28 12:01:04 crc kubenswrapper[4772]: I1128 12:01:04.855985 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ttsl\" (UniqueName: \"kubernetes.io/projected/058bf0b5-6899-4a5d-a098-e40b52cfd512-kube-api-access-2ttsl\") pod \"058bf0b5-6899-4a5d-a098-e40b52cfd512\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " Nov 28 12:01:04 crc kubenswrapper[4772]: I1128 12:01:04.856117 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-fernet-keys\") pod \"058bf0b5-6899-4a5d-a098-e40b52cfd512\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " Nov 28 12:01:04 crc kubenswrapper[4772]: I1128 12:01:04.856353 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-config-data\") pod \"058bf0b5-6899-4a5d-a098-e40b52cfd512\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " Nov 28 12:01:04 crc kubenswrapper[4772]: I1128 12:01:04.856412 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-combined-ca-bundle\") pod \"058bf0b5-6899-4a5d-a098-e40b52cfd512\" (UID: \"058bf0b5-6899-4a5d-a098-e40b52cfd512\") " Nov 28 12:01:04 crc kubenswrapper[4772]: I1128 12:01:04.861633 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "058bf0b5-6899-4a5d-a098-e40b52cfd512" (UID: "058bf0b5-6899-4a5d-a098-e40b52cfd512"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:01:04 crc kubenswrapper[4772]: I1128 12:01:04.866117 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/058bf0b5-6899-4a5d-a098-e40b52cfd512-kube-api-access-2ttsl" (OuterVolumeSpecName: "kube-api-access-2ttsl") pod "058bf0b5-6899-4a5d-a098-e40b52cfd512" (UID: "058bf0b5-6899-4a5d-a098-e40b52cfd512"). InnerVolumeSpecName "kube-api-access-2ttsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:01:04 crc kubenswrapper[4772]: I1128 12:01:04.885119 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "058bf0b5-6899-4a5d-a098-e40b52cfd512" (UID: "058bf0b5-6899-4a5d-a098-e40b52cfd512"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:01:04 crc kubenswrapper[4772]: I1128 12:01:04.919488 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-config-data" (OuterVolumeSpecName: "config-data") pod "058bf0b5-6899-4a5d-a098-e40b52cfd512" (UID: "058bf0b5-6899-4a5d-a098-e40b52cfd512"). InnerVolumeSpecName "config-data". 
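
Each UnmountVolume/TearDown pair above maps onto an on-disk directory, /var/lib/kubelet/pods/&lt;podUID&gt;/volumes/&lt;plugin&gt;/&lt;volume&gt;, where the plugin directory escapes "/" as "~" (kubernetes.io~secret, kubernetes.io~projected, and so on). Once every volume is torn down, the "Cleaned up orphaned pod volumes dir" pass removes the pod directory itself, as the c3a68119… entry near the top of this stretch shows. A small sketch that lists whatever is still present for one pod; the helper is illustrative, and the UID is the keystone-cron pod being torn down here:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// listPodVolumes prints <plugin>/<volume> pairs still on disk for one pod,
// mirroring the /var/lib/kubelet/pods/<uid>/volumes layout the kubelet
// uses (plugin names escape "/" as "~", e.g. kubernetes.io~empty-dir).
func listPodVolumes(podUID string) error {
	root := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")
	plugins, err := os.ReadDir(root)
	if err != nil {
		return err
	}
	for _, p := range plugins {
		vols, err := os.ReadDir(filepath.Join(root, p.Name()))
		if err != nil {
			return err
		}
		for _, v := range vols {
			fmt.Printf("%s/%s\n", p.Name(), v.Name())
		}
	}
	return nil
}

func main() {
	// Pod UID taken from the keystone-cron teardown above.
	if err := listPodVolumes("058bf0b5-6899-4a5d-a098-e40b52cfd512"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

Once this listing comes back empty and the pod is gone from the API, the kubelet_volumes.go cleanup is what emits the "orphaned pod volumes dir" line.
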
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:01:04 crc kubenswrapper[4772]: I1128 12:01:04.959102 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ttsl\" (UniqueName: \"kubernetes.io/projected/058bf0b5-6899-4a5d-a098-e40b52cfd512-kube-api-access-2ttsl\") on node \"crc\" DevicePath \"\"" Nov 28 12:01:04 crc kubenswrapper[4772]: I1128 12:01:04.959450 4772 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 12:01:04 crc kubenswrapper[4772]: I1128 12:01:04.959577 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:01:04 crc kubenswrapper[4772]: I1128 12:01:04.959691 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/058bf0b5-6899-4a5d-a098-e40b52cfd512-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:01:05 crc kubenswrapper[4772]: I1128 12:01:05.322141 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405521-g4n7r" event={"ID":"058bf0b5-6899-4a5d-a098-e40b52cfd512","Type":"ContainerDied","Data":"079f74f4fff8affa54c1cdb465b75f838072c17412c51c79a5bda748ff2e90a8"} Nov 28 12:01:05 crc kubenswrapper[4772]: I1128 12:01:05.322178 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="079f74f4fff8affa54c1cdb465b75f838072c17412c51c79a5bda748ff2e90a8" Nov 28 12:01:05 crc kubenswrapper[4772]: I1128 12:01:05.322193 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405521-g4n7r" Nov 28 12:01:12 crc kubenswrapper[4772]: I1128 12:01:12.000268 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:01:12 crc kubenswrapper[4772]: E1128 12:01:12.000956 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:01:22 crc kubenswrapper[4772]: I1128 12:01:22.994925 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:01:22 crc kubenswrapper[4772]: E1128 12:01:22.995753 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:01:33 crc kubenswrapper[4772]: I1128 12:01:33.994977 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:01:33 crc kubenswrapper[4772]: E1128 12:01:33.995837 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:01:44 crc kubenswrapper[4772]: I1128 12:01:44.995287 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:01:44 crc kubenswrapper[4772]: E1128 12:01:44.996652 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:01:58 crc kubenswrapper[4772]: I1128 12:01:58.995075 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:01:58 crc kubenswrapper[4772]: E1128 12:01:58.996339 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:02:12 crc kubenswrapper[4772]: I1128 12:02:12.994290 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:02:12 crc kubenswrapper[4772]: E1128 12:02:12.995290 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:02:27 crc kubenswrapper[4772]: I1128 12:02:27.995509 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:02:27 crc kubenswrapper[4772]: E1128 12:02:27.996734 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:02:40 crc kubenswrapper[4772]: I1128 12:02:40.994278 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:02:40 crc kubenswrapper[4772]: E1128 12:02:40.995835 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:02:52 crc kubenswrapper[4772]: I1128 12:02:52.994820 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:02:52 crc kubenswrapper[4772]: E1128 12:02:52.996275 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:03:05 crc kubenswrapper[4772]: I1128 12:03:05.995258 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:03:05 crc kubenswrapper[4772]: E1128 12:03:05.996664 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:03:18 crc kubenswrapper[4772]: I1128 12:03:18.995966 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:03:18 crc kubenswrapper[4772]: E1128 12:03:18.997240 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:03:30 crc kubenswrapper[4772]: I1128 12:03:30.002054 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:03:30 crc kubenswrapper[4772]: E1128 12:03:30.004304 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:03:42 crc kubenswrapper[4772]: I1128 12:03:42.007991 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:03:42 crc kubenswrapper[4772]: E1128 12:03:42.009518 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" 
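
The repeating RemoveContainer / "Error syncing pod, skipping" pairs through this stretch are the CrashLoopBackOff gate at work: on every sync the pod worker wants to restart machine-config-daemon, and every time the backoff (already at its "back-off 5m0s" ceiling) rejects it, until the restart finally goes through at 12:03:56 further down. A sketch of the doubling schedule; the 10s initial delay and 5m cap are the kubelet's commonly shipped defaults, stated here as an assumption:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Container restart backoff: start small, double per failure, cap at
	// five minutes -- the "back-off 5m0s" in the entries above. Exact
	// constants are an assumption about kubelet defaults.
	const initial = 10 * time.Second
	const max = 5 * time.Minute

	d := initial
	for i := 1; d < max; i++ {
		fmt.Printf("restart %d delayed %v\n", i, d)
		d *= 2
	}
	fmt.Println("all later restarts delayed", max)
}
```

This is why the rejections recur every ten-odd seconds (each sync re-checks the gate) while the actual restart happens only once the five-minute window has elapsed.
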
podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.215213 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rcp9p"] Nov 28 12:03:47 crc kubenswrapper[4772]: E1128 12:03:47.223976 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3079664b-2930-4138-91ee-0bb3b5d6c1fd" containerName="registry-server" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.224002 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3079664b-2930-4138-91ee-0bb3b5d6c1fd" containerName="registry-server" Nov 28 12:03:47 crc kubenswrapper[4772]: E1128 12:03:47.224016 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a68119-6eea-4860-be17-249edae83008" containerName="registry-server" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.224023 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a68119-6eea-4860-be17-249edae83008" containerName="registry-server" Nov 28 12:03:47 crc kubenswrapper[4772]: E1128 12:03:47.224048 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3079664b-2930-4138-91ee-0bb3b5d6c1fd" containerName="extract-utilities" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.224057 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3079664b-2930-4138-91ee-0bb3b5d6c1fd" containerName="extract-utilities" Nov 28 12:03:47 crc kubenswrapper[4772]: E1128 12:03:47.224071 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3079664b-2930-4138-91ee-0bb3b5d6c1fd" containerName="extract-content" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.224079 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3079664b-2930-4138-91ee-0bb3b5d6c1fd" containerName="extract-content" Nov 28 12:03:47 crc kubenswrapper[4772]: E1128 12:03:47.224102 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058bf0b5-6899-4a5d-a098-e40b52cfd512" containerName="keystone-cron" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.224109 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="058bf0b5-6899-4a5d-a098-e40b52cfd512" containerName="keystone-cron" Nov 28 12:03:47 crc kubenswrapper[4772]: E1128 12:03:47.224132 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a68119-6eea-4860-be17-249edae83008" containerName="extract-utilities" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.224139 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a68119-6eea-4860-be17-249edae83008" containerName="extract-utilities" Nov 28 12:03:47 crc kubenswrapper[4772]: E1128 12:03:47.224151 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a68119-6eea-4860-be17-249edae83008" containerName="extract-content" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.224158 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a68119-6eea-4860-be17-249edae83008" containerName="extract-content" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.224391 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3079664b-2930-4138-91ee-0bb3b5d6c1fd" containerName="registry-server" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.224430 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a68119-6eea-4860-be17-249edae83008" containerName="registry-server" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.224440 4772 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="058bf0b5-6899-4a5d-a098-e40b52cfd512" containerName="keystone-cron" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.226311 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.241886 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rcp9p"] Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.419111 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-utilities\") pod \"certified-operators-rcp9p\" (UID: \"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\") " pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.419228 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-catalog-content\") pod \"certified-operators-rcp9p\" (UID: \"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\") " pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.419267 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqnpj\" (UniqueName: \"kubernetes.io/projected/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-kube-api-access-wqnpj\") pod \"certified-operators-rcp9p\" (UID: \"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\") " pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.521312 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-catalog-content\") pod \"certified-operators-rcp9p\" (UID: \"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\") " pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.521387 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqnpj\" (UniqueName: \"kubernetes.io/projected/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-kube-api-access-wqnpj\") pod \"certified-operators-rcp9p\" (UID: \"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\") " pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.521454 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-utilities\") pod \"certified-operators-rcp9p\" (UID: \"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\") " pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.521850 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-catalog-content\") pod \"certified-operators-rcp9p\" (UID: \"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\") " pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.521861 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-utilities\") pod \"certified-operators-rcp9p\" (UID: 
\"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\") " pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.540515 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqnpj\" (UniqueName: \"kubernetes.io/projected/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-kube-api-access-wqnpj\") pod \"certified-operators-rcp9p\" (UID: \"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\") " pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:03:47 crc kubenswrapper[4772]: I1128 12:03:47.578489 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:03:48 crc kubenswrapper[4772]: I1128 12:03:48.064193 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rcp9p"] Nov 28 12:03:48 crc kubenswrapper[4772]: I1128 12:03:48.357495 4772 generic.go:334] "Generic (PLEG): container finished" podID="ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" containerID="117cd69b5bdaf3fe8c919072e50df16be9eade66458f0425b2c2392529532a07" exitCode=0 Nov 28 12:03:48 crc kubenswrapper[4772]: I1128 12:03:48.357840 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcp9p" event={"ID":"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b","Type":"ContainerDied","Data":"117cd69b5bdaf3fe8c919072e50df16be9eade66458f0425b2c2392529532a07"} Nov 28 12:03:48 crc kubenswrapper[4772]: I1128 12:03:48.357872 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcp9p" event={"ID":"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b","Type":"ContainerStarted","Data":"d85a1b3cdaf97448bb6545838b2a61a8faf9ad011a7c57d98ad6e565d71f5f3a"} Nov 28 12:03:49 crc kubenswrapper[4772]: I1128 12:03:49.369824 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcp9p" event={"ID":"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b","Type":"ContainerStarted","Data":"306ec5182dd3927090de540b3ce9883382156c1538772ab0d392d90dd2f5d03d"} Nov 28 12:03:50 crc kubenswrapper[4772]: I1128 12:03:50.384803 4772 generic.go:334] "Generic (PLEG): container finished" podID="ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" containerID="306ec5182dd3927090de540b3ce9883382156c1538772ab0d392d90dd2f5d03d" exitCode=0 Nov 28 12:03:50 crc kubenswrapper[4772]: I1128 12:03:50.384938 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcp9p" event={"ID":"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b","Type":"ContainerDied","Data":"306ec5182dd3927090de540b3ce9883382156c1538772ab0d392d90dd2f5d03d"} Nov 28 12:03:51 crc kubenswrapper[4772]: I1128 12:03:51.398717 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcp9p" event={"ID":"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b","Type":"ContainerStarted","Data":"787458d8895460a827f7cbd405aef994e2032a537370b928a1d08621bb59f269"} Nov 28 12:03:51 crc kubenswrapper[4772]: I1128 12:03:51.425008 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rcp9p" podStartSLOduration=1.9604373210000001 podStartE2EDuration="4.424987373s" podCreationTimestamp="2025-11-28 12:03:47 +0000 UTC" firstStartedPulling="2025-11-28 12:03:48.359513796 +0000 UTC m=+3426.682757023" lastFinishedPulling="2025-11-28 12:03:50.824063848 +0000 UTC m=+3429.147307075" observedRunningTime="2025-11-28 12:03:51.422671431 +0000 UTC m=+3429.745914698" 
watchObservedRunningTime="2025-11-28 12:03:51.424987373 +0000 UTC m=+3429.748230600" Nov 28 12:03:56 crc kubenswrapper[4772]: I1128 12:03:56.995306 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:03:57 crc kubenswrapper[4772]: I1128 12:03:57.455014 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"d4675f58701a5faa01b0838065d19af68e76bbebaa0a8983faa08ad0061f280e"} Nov 28 12:03:57 crc kubenswrapper[4772]: I1128 12:03:57.581628 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:03:57 crc kubenswrapper[4772]: I1128 12:03:57.581675 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:03:57 crc kubenswrapper[4772]: I1128 12:03:57.638953 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:03:58 crc kubenswrapper[4772]: I1128 12:03:58.510640 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:04:00 crc kubenswrapper[4772]: I1128 12:04:00.597201 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rcp9p"] Nov 28 12:04:00 crc kubenswrapper[4772]: I1128 12:04:00.598145 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rcp9p" podUID="ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" containerName="registry-server" containerID="cri-o://787458d8895460a827f7cbd405aef994e2032a537370b928a1d08621bb59f269" gracePeriod=2 Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.103825 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.178580 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-utilities\") pod \"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\" (UID: \"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\") " Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.178641 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqnpj\" (UniqueName: \"kubernetes.io/projected/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-kube-api-access-wqnpj\") pod \"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\" (UID: \"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\") " Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.178724 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-catalog-content\") pod \"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\" (UID: \"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b\") " Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.180245 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-utilities" (OuterVolumeSpecName: "utilities") pod "ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" (UID: "ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b"). InnerVolumeSpecName "utilities". 
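
The probe transitions above (startup "unhealthy", then "started" within a second, then readiness "ready") show the certified-operators registry settling. The actual probe spec is not in the log; the snippet below is a hypothetical pair of core/v1 probes merely consistent with those transitions, with the customary registry-server gRPC port 50051 assumed:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Hypothetical probes matching the logged transitions: a fast startup
	// probe that can fail once before the server binds its port, then a
	// readiness probe that flips the pod Ready. Port 50051 is assumed.
	startup := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt32(50051)},
		},
		PeriodSeconds:    1,
		FailureThreshold: 10,
	}
	readiness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"grpc_health_probe", "-addr=:50051"},
			},
		},
		PeriodSeconds: 10,
	}
	fmt.Println(startup, readiness)
}
```
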
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.185634 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-kube-api-access-wqnpj" (OuterVolumeSpecName: "kube-api-access-wqnpj") pod "ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" (UID: "ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b"). InnerVolumeSpecName "kube-api-access-wqnpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.244246 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" (UID: "ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.280350 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.280401 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqnpj\" (UniqueName: \"kubernetes.io/projected/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-kube-api-access-wqnpj\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.280414 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.498105 4772 generic.go:334] "Generic (PLEG): container finished" podID="ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" containerID="787458d8895460a827f7cbd405aef994e2032a537370b928a1d08621bb59f269" exitCode=0 Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.498152 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcp9p" event={"ID":"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b","Type":"ContainerDied","Data":"787458d8895460a827f7cbd405aef994e2032a537370b928a1d08621bb59f269"} Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.498182 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcp9p" event={"ID":"ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b","Type":"ContainerDied","Data":"d85a1b3cdaf97448bb6545838b2a61a8faf9ad011a7c57d98ad6e565d71f5f3a"} Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.498204 4772 scope.go:117] "RemoveContainer" containerID="787458d8895460a827f7cbd405aef994e2032a537370b928a1d08621bb59f269" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.498340 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rcp9p" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.531884 4772 scope.go:117] "RemoveContainer" containerID="306ec5182dd3927090de540b3ce9883382156c1538772ab0d392d90dd2f5d03d" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.545569 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rcp9p"] Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.554671 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rcp9p"] Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.560834 4772 scope.go:117] "RemoveContainer" containerID="117cd69b5bdaf3fe8c919072e50df16be9eade66458f0425b2c2392529532a07" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.618636 4772 scope.go:117] "RemoveContainer" containerID="787458d8895460a827f7cbd405aef994e2032a537370b928a1d08621bb59f269" Nov 28 12:04:01 crc kubenswrapper[4772]: E1128 12:04:01.620684 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"787458d8895460a827f7cbd405aef994e2032a537370b928a1d08621bb59f269\": container with ID starting with 787458d8895460a827f7cbd405aef994e2032a537370b928a1d08621bb59f269 not found: ID does not exist" containerID="787458d8895460a827f7cbd405aef994e2032a537370b928a1d08621bb59f269" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.620731 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"787458d8895460a827f7cbd405aef994e2032a537370b928a1d08621bb59f269"} err="failed to get container status \"787458d8895460a827f7cbd405aef994e2032a537370b928a1d08621bb59f269\": rpc error: code = NotFound desc = could not find container \"787458d8895460a827f7cbd405aef994e2032a537370b928a1d08621bb59f269\": container with ID starting with 787458d8895460a827f7cbd405aef994e2032a537370b928a1d08621bb59f269 not found: ID does not exist" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.620761 4772 scope.go:117] "RemoveContainer" containerID="306ec5182dd3927090de540b3ce9883382156c1538772ab0d392d90dd2f5d03d" Nov 28 12:04:01 crc kubenswrapper[4772]: E1128 12:04:01.621311 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"306ec5182dd3927090de540b3ce9883382156c1538772ab0d392d90dd2f5d03d\": container with ID starting with 306ec5182dd3927090de540b3ce9883382156c1538772ab0d392d90dd2f5d03d not found: ID does not exist" containerID="306ec5182dd3927090de540b3ce9883382156c1538772ab0d392d90dd2f5d03d" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.621353 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"306ec5182dd3927090de540b3ce9883382156c1538772ab0d392d90dd2f5d03d"} err="failed to get container status \"306ec5182dd3927090de540b3ce9883382156c1538772ab0d392d90dd2f5d03d\": rpc error: code = NotFound desc = could not find container \"306ec5182dd3927090de540b3ce9883382156c1538772ab0d392d90dd2f5d03d\": container with ID starting with 306ec5182dd3927090de540b3ce9883382156c1538772ab0d392d90dd2f5d03d not found: ID does not exist" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.621430 4772 scope.go:117] "RemoveContainer" containerID="117cd69b5bdaf3fe8c919072e50df16be9eade66458f0425b2c2392529532a07" Nov 28 12:04:01 crc kubenswrapper[4772]: E1128 12:04:01.621868 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"117cd69b5bdaf3fe8c919072e50df16be9eade66458f0425b2c2392529532a07\": container with ID starting with 117cd69b5bdaf3fe8c919072e50df16be9eade66458f0425b2c2392529532a07 not found: ID does not exist" containerID="117cd69b5bdaf3fe8c919072e50df16be9eade66458f0425b2c2392529532a07" Nov 28 12:04:01 crc kubenswrapper[4772]: I1128 12:04:01.621907 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117cd69b5bdaf3fe8c919072e50df16be9eade66458f0425b2c2392529532a07"} err="failed to get container status \"117cd69b5bdaf3fe8c919072e50df16be9eade66458f0425b2c2392529532a07\": rpc error: code = NotFound desc = could not find container \"117cd69b5bdaf3fe8c919072e50df16be9eade66458f0425b2c2392529532a07\": container with ID starting with 117cd69b5bdaf3fe8c919072e50df16be9eade66458f0425b2c2392529532a07 not found: ID does not exist" Nov 28 12:04:02 crc kubenswrapper[4772]: I1128 12:04:02.011088 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" path="/var/lib/kubelet/pods/ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b/volumes" Nov 28 12:05:31 crc kubenswrapper[4772]: I1128 12:05:31.374214 4772 generic.go:334] "Generic (PLEG): container finished" podID="39592588-10c2-45fd-88fb-cb63f200c871" containerID="3768cd9511fb04c58cf75995b759bd2b8e5c4a3bb16e559199f85046519d287b" exitCode=0 Nov 28 12:05:31 crc kubenswrapper[4772]: I1128 12:05:31.374312 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"39592588-10c2-45fd-88fb-cb63f200c871","Type":"ContainerDied","Data":"3768cd9511fb04c58cf75995b759bd2b8e5c4a3bb16e559199f85046519d287b"} Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.848464 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.969880 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-ssh-key\") pod \"39592588-10c2-45fd-88fb-cb63f200c871\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.969985 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-ca-certs\") pod \"39592588-10c2-45fd-88fb-cb63f200c871\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.970027 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39592588-10c2-45fd-88fb-cb63f200c871-config-data\") pod \"39592588-10c2-45fd-88fb-cb63f200c871\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.970064 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm4w4\" (UniqueName: \"kubernetes.io/projected/39592588-10c2-45fd-88fb-cb63f200c871-kube-api-access-jm4w4\") pod \"39592588-10c2-45fd-88fb-cb63f200c871\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.970160 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/39592588-10c2-45fd-88fb-cb63f200c871-openstack-config\") pod \"39592588-10c2-45fd-88fb-cb63f200c871\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.970254 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-openstack-config-secret\") pod \"39592588-10c2-45fd-88fb-cb63f200c871\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.970434 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"39592588-10c2-45fd-88fb-cb63f200c871\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.970811 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/39592588-10c2-45fd-88fb-cb63f200c871-test-operator-ephemeral-temporary\") pod \"39592588-10c2-45fd-88fb-cb63f200c871\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.970911 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/39592588-10c2-45fd-88fb-cb63f200c871-test-operator-ephemeral-workdir\") pod \"39592588-10c2-45fd-88fb-cb63f200c871\" (UID: \"39592588-10c2-45fd-88fb-cb63f200c871\") " Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.971652 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39592588-10c2-45fd-88fb-cb63f200c871-config-data" (OuterVolumeSpecName: "config-data") pod 
"39592588-10c2-45fd-88fb-cb63f200c871" (UID: "39592588-10c2-45fd-88fb-cb63f200c871"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.971808 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39592588-10c2-45fd-88fb-cb63f200c871-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "39592588-10c2-45fd-88fb-cb63f200c871" (UID: "39592588-10c2-45fd-88fb-cb63f200c871"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.972513 4772 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/39592588-10c2-45fd-88fb-cb63f200c871-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.972541 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39592588-10c2-45fd-88fb-cb63f200c871-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.977262 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39592588-10c2-45fd-88fb-cb63f200c871-kube-api-access-jm4w4" (OuterVolumeSpecName: "kube-api-access-jm4w4") pod "39592588-10c2-45fd-88fb-cb63f200c871" (UID: "39592588-10c2-45fd-88fb-cb63f200c871"). InnerVolumeSpecName "kube-api-access-jm4w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.977909 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "39592588-10c2-45fd-88fb-cb63f200c871" (UID: "39592588-10c2-45fd-88fb-cb63f200c871"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:05:32 crc kubenswrapper[4772]: I1128 12:05:32.981081 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39592588-10c2-45fd-88fb-cb63f200c871-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "39592588-10c2-45fd-88fb-cb63f200c871" (UID: "39592588-10c2-45fd-88fb-cb63f200c871"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.005560 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "39592588-10c2-45fd-88fb-cb63f200c871" (UID: "39592588-10c2-45fd-88fb-cb63f200c871"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.011689 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "39592588-10c2-45fd-88fb-cb63f200c871" (UID: "39592588-10c2-45fd-88fb-cb63f200c871"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.028692 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "39592588-10c2-45fd-88fb-cb63f200c871" (UID: "39592588-10c2-45fd-88fb-cb63f200c871"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.051468 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39592588-10c2-45fd-88fb-cb63f200c871-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "39592588-10c2-45fd-88fb-cb63f200c871" (UID: "39592588-10c2-45fd-88fb-cb63f200c871"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.073798 4772 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/39592588-10c2-45fd-88fb-cb63f200c871-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.073834 4772 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.073866 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.073876 4772 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/39592588-10c2-45fd-88fb-cb63f200c871-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.073886 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.073896 4772 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/39592588-10c2-45fd-88fb-cb63f200c871-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.073904 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm4w4\" (UniqueName: \"kubernetes.io/projected/39592588-10c2-45fd-88fb-cb63f200c871-kube-api-access-jm4w4\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.094376 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.175025 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.405893 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"39592588-10c2-45fd-88fb-cb63f200c871","Type":"ContainerDied","Data":"6beb57b44bbce4109fd95cf04b431ca3d5c77669b690373a04b30b99bcae408d"} Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.405969 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6beb57b44bbce4109fd95cf04b431ca3d5c77669b690373a04b30b99bcae408d" Nov 28 12:05:33 crc kubenswrapper[4772]: I1128 12:05:33.406105 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.061054 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7m2d4"] Nov 28 12:05:36 crc kubenswrapper[4772]: E1128 12:05:36.061554 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" containerName="extract-content" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.061570 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" containerName="extract-content" Nov 28 12:05:36 crc kubenswrapper[4772]: E1128 12:05:36.061612 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" containerName="extract-utilities" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.061621 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" containerName="extract-utilities" Nov 28 12:05:36 crc kubenswrapper[4772]: E1128 12:05:36.061639 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" containerName="registry-server" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.061648 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" containerName="registry-server" Nov 28 12:05:36 crc kubenswrapper[4772]: E1128 12:05:36.061668 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39592588-10c2-45fd-88fb-cb63f200c871" containerName="tempest-tests-tempest-tests-runner" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.061678 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="39592588-10c2-45fd-88fb-cb63f200c871" containerName="tempest-tests-tempest-tests-runner" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.061944 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="39592588-10c2-45fd-88fb-cb63f200c871" containerName="tempest-tests-tempest-tests-runner" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.061978 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca9e8c7e-43a8-49a5-af13-15aa6ad51b2b" containerName="registry-server" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.063662 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.076299 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7m2d4"] Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.136963 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2604294-a1a6-49e6-b8b9-da2b310cbff3-utilities\") pod \"redhat-operators-7m2d4\" (UID: \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\") " pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.137027 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk9pr\" (UniqueName: \"kubernetes.io/projected/a2604294-a1a6-49e6-b8b9-da2b310cbff3-kube-api-access-kk9pr\") pod \"redhat-operators-7m2d4\" (UID: \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\") " pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.137085 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2604294-a1a6-49e6-b8b9-da2b310cbff3-catalog-content\") pod \"redhat-operators-7m2d4\" (UID: \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\") " pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.238830 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2604294-a1a6-49e6-b8b9-da2b310cbff3-utilities\") pod \"redhat-operators-7m2d4\" (UID: \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\") " pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.238908 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk9pr\" (UniqueName: \"kubernetes.io/projected/a2604294-a1a6-49e6-b8b9-da2b310cbff3-kube-api-access-kk9pr\") pod \"redhat-operators-7m2d4\" (UID: \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\") " pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.238973 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2604294-a1a6-49e6-b8b9-da2b310cbff3-catalog-content\") pod \"redhat-operators-7m2d4\" (UID: \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\") " pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.239617 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2604294-a1a6-49e6-b8b9-da2b310cbff3-utilities\") pod \"redhat-operators-7m2d4\" (UID: \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\") " pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.239661 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2604294-a1a6-49e6-b8b9-da2b310cbff3-catalog-content\") pod \"redhat-operators-7m2d4\" (UID: \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\") " pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.257857 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kk9pr\" (UniqueName: \"kubernetes.io/projected/a2604294-a1a6-49e6-b8b9-da2b310cbff3-kube-api-access-kk9pr\") pod \"redhat-operators-7m2d4\" (UID: \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\") " pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.394225 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:36 crc kubenswrapper[4772]: I1128 12:05:36.947628 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7m2d4"] Nov 28 12:05:37 crc kubenswrapper[4772]: I1128 12:05:37.487179 4772 generic.go:334] "Generic (PLEG): container finished" podID="a2604294-a1a6-49e6-b8b9-da2b310cbff3" containerID="cd8d328bf7cf688bde1d8e8066d8c2d52d05ce70b41f7da3232db2889ab16fb7" exitCode=0 Nov 28 12:05:37 crc kubenswrapper[4772]: I1128 12:05:37.487235 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m2d4" event={"ID":"a2604294-a1a6-49e6-b8b9-da2b310cbff3","Type":"ContainerDied","Data":"cd8d328bf7cf688bde1d8e8066d8c2d52d05ce70b41f7da3232db2889ab16fb7"} Nov 28 12:05:37 crc kubenswrapper[4772]: I1128 12:05:37.487538 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m2d4" event={"ID":"a2604294-a1a6-49e6-b8b9-da2b310cbff3","Type":"ContainerStarted","Data":"f92fff10ac5f12ee50b57c94961b19e4d14bd3d1fedccd761d46a210da5e1bed"} Nov 28 12:05:39 crc kubenswrapper[4772]: I1128 12:05:39.517476 4772 generic.go:334] "Generic (PLEG): container finished" podID="a2604294-a1a6-49e6-b8b9-da2b310cbff3" containerID="c3d79cdccb79de0c7f9ae5998deea89379ec2178bf31d28377b0a4c4daa32f8f" exitCode=0 Nov 28 12:05:39 crc kubenswrapper[4772]: I1128 12:05:39.517530 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m2d4" event={"ID":"a2604294-a1a6-49e6-b8b9-da2b310cbff3","Type":"ContainerDied","Data":"c3d79cdccb79de0c7f9ae5998deea89379ec2178bf31d28377b0a4c4daa32f8f"} Nov 28 12:05:40 crc kubenswrapper[4772]: I1128 12:05:40.365849 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 12:05:40 crc kubenswrapper[4772]: I1128 12:05:40.367545 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 12:05:40 crc kubenswrapper[4772]: I1128 12:05:40.370868 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bx5kx" Nov 28 12:05:40 crc kubenswrapper[4772]: I1128 12:05:40.387703 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 12:05:40 crc kubenswrapper[4772]: I1128 12:05:40.477194 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n4q6\" (UniqueName: \"kubernetes.io/projected/3e1800cb-ecd9-443b-b9a9-6437d8abbfc7-kube-api-access-5n4q6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3e1800cb-ecd9-443b-b9a9-6437d8abbfc7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 12:05:40 crc kubenswrapper[4772]: I1128 12:05:40.477396 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3e1800cb-ecd9-443b-b9a9-6437d8abbfc7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 12:05:40 crc kubenswrapper[4772]: I1128 12:05:40.532983 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m2d4" event={"ID":"a2604294-a1a6-49e6-b8b9-da2b310cbff3","Type":"ContainerStarted","Data":"dee8f5dc31db735a82874c438403ca3e0100f53c82d43e696bcace812e2a62a8"} Nov 28 12:05:40 crc kubenswrapper[4772]: I1128 12:05:40.559551 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7m2d4" podStartSLOduration=2.024793708 podStartE2EDuration="4.559528281s" podCreationTimestamp="2025-11-28 12:05:36 +0000 UTC" firstStartedPulling="2025-11-28 12:05:37.491548221 +0000 UTC m=+3535.814791468" lastFinishedPulling="2025-11-28 12:05:40.026282804 +0000 UTC m=+3538.349526041" observedRunningTime="2025-11-28 12:05:40.552941145 +0000 UTC m=+3538.876184382" watchObservedRunningTime="2025-11-28 12:05:40.559528281 +0000 UTC m=+3538.882771528" Nov 28 12:05:40 crc kubenswrapper[4772]: I1128 12:05:40.579965 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3e1800cb-ecd9-443b-b9a9-6437d8abbfc7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 12:05:40 crc kubenswrapper[4772]: I1128 12:05:40.580108 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n4q6\" (UniqueName: \"kubernetes.io/projected/3e1800cb-ecd9-443b-b9a9-6437d8abbfc7-kube-api-access-5n4q6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3e1800cb-ecd9-443b-b9a9-6437d8abbfc7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 12:05:40 crc kubenswrapper[4772]: I1128 12:05:40.580697 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3e1800cb-ecd9-443b-b9a9-6437d8abbfc7\") device mount path \"/mnt/openstack/pv04\"" 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 12:05:40 crc kubenswrapper[4772]: I1128 12:05:40.606350 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n4q6\" (UniqueName: \"kubernetes.io/projected/3e1800cb-ecd9-443b-b9a9-6437d8abbfc7-kube-api-access-5n4q6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3e1800cb-ecd9-443b-b9a9-6437d8abbfc7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 12:05:40 crc kubenswrapper[4772]: I1128 12:05:40.622057 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3e1800cb-ecd9-443b-b9a9-6437d8abbfc7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 12:05:40 crc kubenswrapper[4772]: I1128 12:05:40.694471 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 12:05:41 crc kubenswrapper[4772]: I1128 12:05:41.133329 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 12:05:41 crc kubenswrapper[4772]: I1128 12:05:41.546118 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3e1800cb-ecd9-443b-b9a9-6437d8abbfc7","Type":"ContainerStarted","Data":"d27ea16c37df8bf8424788ae9fbc8fa7e461999f975060c2812ec6ac4f767965"} Nov 28 12:05:43 crc kubenswrapper[4772]: I1128 12:05:43.580570 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3e1800cb-ecd9-443b-b9a9-6437d8abbfc7","Type":"ContainerStarted","Data":"feeedc76254deca42fc4a40ca64ac507a3a5603e8bd3ce5ef29a196a88091012"} Nov 28 12:05:43 crc kubenswrapper[4772]: I1128 12:05:43.599457 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.293235451 podStartE2EDuration="3.59943939s" podCreationTimestamp="2025-11-28 12:05:40 +0000 UTC" firstStartedPulling="2025-11-28 12:05:41.136880867 +0000 UTC m=+3539.460124104" lastFinishedPulling="2025-11-28 12:05:42.443084806 +0000 UTC m=+3540.766328043" observedRunningTime="2025-11-28 12:05:43.596408369 +0000 UTC m=+3541.919651596" watchObservedRunningTime="2025-11-28 12:05:43.59943939 +0000 UTC m=+3541.922682627" Nov 28 12:05:46 crc kubenswrapper[4772]: I1128 12:05:46.394542 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:46 crc kubenswrapper[4772]: I1128 12:05:46.394989 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:46 crc kubenswrapper[4772]: I1128 12:05:46.439829 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:46 crc kubenswrapper[4772]: I1128 12:05:46.662750 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:46 crc kubenswrapper[4772]: I1128 12:05:46.722409 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-7m2d4"] Nov 28 12:05:48 crc kubenswrapper[4772]: I1128 12:05:48.633098 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7m2d4" podUID="a2604294-a1a6-49e6-b8b9-da2b310cbff3" containerName="registry-server" containerID="cri-o://dee8f5dc31db735a82874c438403ca3e0100f53c82d43e696bcace812e2a62a8" gracePeriod=2 Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.263853 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.444272 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk9pr\" (UniqueName: \"kubernetes.io/projected/a2604294-a1a6-49e6-b8b9-da2b310cbff3-kube-api-access-kk9pr\") pod \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\" (UID: \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\") " Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.444588 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2604294-a1a6-49e6-b8b9-da2b310cbff3-utilities\") pod \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\" (UID: \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\") " Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.444767 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2604294-a1a6-49e6-b8b9-da2b310cbff3-catalog-content\") pod \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\" (UID: \"a2604294-a1a6-49e6-b8b9-da2b310cbff3\") " Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.445520 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2604294-a1a6-49e6-b8b9-da2b310cbff3-utilities" (OuterVolumeSpecName: "utilities") pod "a2604294-a1a6-49e6-b8b9-da2b310cbff3" (UID: "a2604294-a1a6-49e6-b8b9-da2b310cbff3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.454497 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2604294-a1a6-49e6-b8b9-da2b310cbff3-kube-api-access-kk9pr" (OuterVolumeSpecName: "kube-api-access-kk9pr") pod "a2604294-a1a6-49e6-b8b9-da2b310cbff3" (UID: "a2604294-a1a6-49e6-b8b9-da2b310cbff3"). InnerVolumeSpecName "kube-api-access-kk9pr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.548007 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk9pr\" (UniqueName: \"kubernetes.io/projected/a2604294-a1a6-49e6-b8b9-da2b310cbff3-kube-api-access-kk9pr\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.548049 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2604294-a1a6-49e6-b8b9-da2b310cbff3-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.652203 4772 generic.go:334] "Generic (PLEG): container finished" podID="a2604294-a1a6-49e6-b8b9-da2b310cbff3" containerID="dee8f5dc31db735a82874c438403ca3e0100f53c82d43e696bcace812e2a62a8" exitCode=0 Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.652271 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m2d4" event={"ID":"a2604294-a1a6-49e6-b8b9-da2b310cbff3","Type":"ContainerDied","Data":"dee8f5dc31db735a82874c438403ca3e0100f53c82d43e696bcace812e2a62a8"} Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.652513 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7m2d4" Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.652883 4772 scope.go:117] "RemoveContainer" containerID="dee8f5dc31db735a82874c438403ca3e0100f53c82d43e696bcace812e2a62a8" Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.653283 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m2d4" event={"ID":"a2604294-a1a6-49e6-b8b9-da2b310cbff3","Type":"ContainerDied","Data":"f92fff10ac5f12ee50b57c94961b19e4d14bd3d1fedccd761d46a210da5e1bed"} Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.684684 4772 scope.go:117] "RemoveContainer" containerID="c3d79cdccb79de0c7f9ae5998deea89379ec2178bf31d28377b0a4c4daa32f8f" Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.720902 4772 scope.go:117] "RemoveContainer" containerID="cd8d328bf7cf688bde1d8e8066d8c2d52d05ce70b41f7da3232db2889ab16fb7" Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.774931 4772 scope.go:117] "RemoveContainer" containerID="dee8f5dc31db735a82874c438403ca3e0100f53c82d43e696bcace812e2a62a8" Nov 28 12:05:49 crc kubenswrapper[4772]: E1128 12:05:49.775918 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dee8f5dc31db735a82874c438403ca3e0100f53c82d43e696bcace812e2a62a8\": container with ID starting with dee8f5dc31db735a82874c438403ca3e0100f53c82d43e696bcace812e2a62a8 not found: ID does not exist" containerID="dee8f5dc31db735a82874c438403ca3e0100f53c82d43e696bcace812e2a62a8" Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.775982 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee8f5dc31db735a82874c438403ca3e0100f53c82d43e696bcace812e2a62a8"} err="failed to get container status \"dee8f5dc31db735a82874c438403ca3e0100f53c82d43e696bcace812e2a62a8\": rpc error: code = NotFound desc = could not find container \"dee8f5dc31db735a82874c438403ca3e0100f53c82d43e696bcace812e2a62a8\": container with ID starting with dee8f5dc31db735a82874c438403ca3e0100f53c82d43e696bcace812e2a62a8 not found: ID does not exist" Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.776028 4772 scope.go:117] 
"RemoveContainer" containerID="c3d79cdccb79de0c7f9ae5998deea89379ec2178bf31d28377b0a4c4daa32f8f" Nov 28 12:05:49 crc kubenswrapper[4772]: E1128 12:05:49.776582 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3d79cdccb79de0c7f9ae5998deea89379ec2178bf31d28377b0a4c4daa32f8f\": container with ID starting with c3d79cdccb79de0c7f9ae5998deea89379ec2178bf31d28377b0a4c4daa32f8f not found: ID does not exist" containerID="c3d79cdccb79de0c7f9ae5998deea89379ec2178bf31d28377b0a4c4daa32f8f" Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.776633 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3d79cdccb79de0c7f9ae5998deea89379ec2178bf31d28377b0a4c4daa32f8f"} err="failed to get container status \"c3d79cdccb79de0c7f9ae5998deea89379ec2178bf31d28377b0a4c4daa32f8f\": rpc error: code = NotFound desc = could not find container \"c3d79cdccb79de0c7f9ae5998deea89379ec2178bf31d28377b0a4c4daa32f8f\": container with ID starting with c3d79cdccb79de0c7f9ae5998deea89379ec2178bf31d28377b0a4c4daa32f8f not found: ID does not exist" Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.776671 4772 scope.go:117] "RemoveContainer" containerID="cd8d328bf7cf688bde1d8e8066d8c2d52d05ce70b41f7da3232db2889ab16fb7" Nov 28 12:05:49 crc kubenswrapper[4772]: E1128 12:05:49.777323 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd8d328bf7cf688bde1d8e8066d8c2d52d05ce70b41f7da3232db2889ab16fb7\": container with ID starting with cd8d328bf7cf688bde1d8e8066d8c2d52d05ce70b41f7da3232db2889ab16fb7 not found: ID does not exist" containerID="cd8d328bf7cf688bde1d8e8066d8c2d52d05ce70b41f7da3232db2889ab16fb7" Nov 28 12:05:49 crc kubenswrapper[4772]: I1128 12:05:49.777428 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd8d328bf7cf688bde1d8e8066d8c2d52d05ce70b41f7da3232db2889ab16fb7"} err="failed to get container status \"cd8d328bf7cf688bde1d8e8066d8c2d52d05ce70b41f7da3232db2889ab16fb7\": rpc error: code = NotFound desc = could not find container \"cd8d328bf7cf688bde1d8e8066d8c2d52d05ce70b41f7da3232db2889ab16fb7\": container with ID starting with cd8d328bf7cf688bde1d8e8066d8c2d52d05ce70b41f7da3232db2889ab16fb7 not found: ID does not exist" Nov 28 12:05:50 crc kubenswrapper[4772]: I1128 12:05:50.663988 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2604294-a1a6-49e6-b8b9-da2b310cbff3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2604294-a1a6-49e6-b8b9-da2b310cbff3" (UID: "a2604294-a1a6-49e6-b8b9-da2b310cbff3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:05:50 crc kubenswrapper[4772]: I1128 12:05:50.672072 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2604294-a1a6-49e6-b8b9-da2b310cbff3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:50 crc kubenswrapper[4772]: I1128 12:05:50.886572 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7m2d4"] Nov 28 12:05:50 crc kubenswrapper[4772]: I1128 12:05:50.894614 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7m2d4"] Nov 28 12:05:52 crc kubenswrapper[4772]: I1128 12:05:52.025192 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2604294-a1a6-49e6-b8b9-da2b310cbff3" path="/var/lib/kubelet/pods/a2604294-a1a6-49e6-b8b9-da2b310cbff3/volumes" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.112326 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hj8n5/must-gather-w24r7"] Nov 28 12:06:05 crc kubenswrapper[4772]: E1128 12:06:05.114154 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2604294-a1a6-49e6-b8b9-da2b310cbff3" containerName="registry-server" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.114176 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2604294-a1a6-49e6-b8b9-da2b310cbff3" containerName="registry-server" Nov 28 12:06:05 crc kubenswrapper[4772]: E1128 12:06:05.114223 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2604294-a1a6-49e6-b8b9-da2b310cbff3" containerName="extract-content" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.114235 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2604294-a1a6-49e6-b8b9-da2b310cbff3" containerName="extract-content" Nov 28 12:06:05 crc kubenswrapper[4772]: E1128 12:06:05.114250 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2604294-a1a6-49e6-b8b9-da2b310cbff3" containerName="extract-utilities" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.114259 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2604294-a1a6-49e6-b8b9-da2b310cbff3" containerName="extract-utilities" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.114616 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2604294-a1a6-49e6-b8b9-da2b310cbff3" containerName="registry-server" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.116470 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hj8n5/must-gather-w24r7" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.120382 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hj8n5"/"kube-root-ca.crt" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.120440 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hj8n5/must-gather-w24r7"] Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.120655 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hj8n5"/"default-dockercfg-qrdb7" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.121541 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hj8n5"/"openshift-service-ca.crt" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.274599 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/611ce4cc-b675-496a-b265-189252ce3818-must-gather-output\") pod \"must-gather-w24r7\" (UID: \"611ce4cc-b675-496a-b265-189252ce3818\") " pod="openshift-must-gather-hj8n5/must-gather-w24r7" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.274736 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx924\" (UniqueName: \"kubernetes.io/projected/611ce4cc-b675-496a-b265-189252ce3818-kube-api-access-xx924\") pod \"must-gather-w24r7\" (UID: \"611ce4cc-b675-496a-b265-189252ce3818\") " pod="openshift-must-gather-hj8n5/must-gather-w24r7" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.376808 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/611ce4cc-b675-496a-b265-189252ce3818-must-gather-output\") pod \"must-gather-w24r7\" (UID: \"611ce4cc-b675-496a-b265-189252ce3818\") " pod="openshift-must-gather-hj8n5/must-gather-w24r7" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.376935 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx924\" (UniqueName: \"kubernetes.io/projected/611ce4cc-b675-496a-b265-189252ce3818-kube-api-access-xx924\") pod \"must-gather-w24r7\" (UID: \"611ce4cc-b675-496a-b265-189252ce3818\") " pod="openshift-must-gather-hj8n5/must-gather-w24r7" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.377201 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/611ce4cc-b675-496a-b265-189252ce3818-must-gather-output\") pod \"must-gather-w24r7\" (UID: \"611ce4cc-b675-496a-b265-189252ce3818\") " pod="openshift-must-gather-hj8n5/must-gather-w24r7" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.397382 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx924\" (UniqueName: \"kubernetes.io/projected/611ce4cc-b675-496a-b265-189252ce3818-kube-api-access-xx924\") pod \"must-gather-w24r7\" (UID: \"611ce4cc-b675-496a-b265-189252ce3818\") " pod="openshift-must-gather-hj8n5/must-gather-w24r7" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.440511 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hj8n5/must-gather-w24r7" Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.963058 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 12:06:05 crc kubenswrapper[4772]: I1128 12:06:05.969506 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hj8n5/must-gather-w24r7"] Nov 28 12:06:06 crc kubenswrapper[4772]: I1128 12:06:06.868715 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hj8n5/must-gather-w24r7" event={"ID":"611ce4cc-b675-496a-b265-189252ce3818","Type":"ContainerStarted","Data":"6a583128951adc0dbb802901bae7102189775c733696c40ddbc384e7beb87040"} Nov 28 12:06:12 crc kubenswrapper[4772]: I1128 12:06:12.950993 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hj8n5/must-gather-w24r7" event={"ID":"611ce4cc-b675-496a-b265-189252ce3818","Type":"ContainerStarted","Data":"8398108d834b08fa6628a7d0b60df8367e49dc8186bad2e16e58b9106590a248"} Nov 28 12:06:13 crc kubenswrapper[4772]: I1128 12:06:13.960906 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hj8n5/must-gather-w24r7" event={"ID":"611ce4cc-b675-496a-b265-189252ce3818","Type":"ContainerStarted","Data":"337870245d3256bae378d4018ea10946dbaf5c3436c7523c887505612761dc45"} Nov 28 12:06:13 crc kubenswrapper[4772]: I1128 12:06:13.995275 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hj8n5/must-gather-w24r7" podStartSLOduration=2.808648126 podStartE2EDuration="8.995241297s" podCreationTimestamp="2025-11-28 12:06:05 +0000 UTC" firstStartedPulling="2025-11-28 12:06:05.962730467 +0000 UTC m=+3564.285973684" lastFinishedPulling="2025-11-28 12:06:12.149323628 +0000 UTC m=+3570.472566855" observedRunningTime="2025-11-28 12:06:13.980499323 +0000 UTC m=+3572.303742590" watchObservedRunningTime="2025-11-28 12:06:13.995241297 +0000 UTC m=+3572.318484564" Nov 28 12:06:15 crc kubenswrapper[4772]: I1128 12:06:15.977696 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hj8n5/crc-debug-r4zxc"] Nov 28 12:06:15 crc kubenswrapper[4772]: I1128 12:06:15.978967 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hj8n5/crc-debug-r4zxc" Nov 28 12:06:16 crc kubenswrapper[4772]: I1128 12:06:16.129511 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3d963ba-cdf6-45a8-af39-b5617c50f5ee-host\") pod \"crc-debug-r4zxc\" (UID: \"e3d963ba-cdf6-45a8-af39-b5617c50f5ee\") " pod="openshift-must-gather-hj8n5/crc-debug-r4zxc" Nov 28 12:06:16 crc kubenswrapper[4772]: I1128 12:06:16.129615 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67h7c\" (UniqueName: \"kubernetes.io/projected/e3d963ba-cdf6-45a8-af39-b5617c50f5ee-kube-api-access-67h7c\") pod \"crc-debug-r4zxc\" (UID: \"e3d963ba-cdf6-45a8-af39-b5617c50f5ee\") " pod="openshift-must-gather-hj8n5/crc-debug-r4zxc" Nov 28 12:06:16 crc kubenswrapper[4772]: I1128 12:06:16.231037 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3d963ba-cdf6-45a8-af39-b5617c50f5ee-host\") pod \"crc-debug-r4zxc\" (UID: \"e3d963ba-cdf6-45a8-af39-b5617c50f5ee\") " pod="openshift-must-gather-hj8n5/crc-debug-r4zxc" Nov 28 12:06:16 crc kubenswrapper[4772]: I1128 12:06:16.231107 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67h7c\" (UniqueName: \"kubernetes.io/projected/e3d963ba-cdf6-45a8-af39-b5617c50f5ee-kube-api-access-67h7c\") pod \"crc-debug-r4zxc\" (UID: \"e3d963ba-cdf6-45a8-af39-b5617c50f5ee\") " pod="openshift-must-gather-hj8n5/crc-debug-r4zxc" Nov 28 12:06:16 crc kubenswrapper[4772]: I1128 12:06:16.231179 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3d963ba-cdf6-45a8-af39-b5617c50f5ee-host\") pod \"crc-debug-r4zxc\" (UID: \"e3d963ba-cdf6-45a8-af39-b5617c50f5ee\") " pod="openshift-must-gather-hj8n5/crc-debug-r4zxc" Nov 28 12:06:16 crc kubenswrapper[4772]: I1128 12:06:16.249465 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67h7c\" (UniqueName: \"kubernetes.io/projected/e3d963ba-cdf6-45a8-af39-b5617c50f5ee-kube-api-access-67h7c\") pod \"crc-debug-r4zxc\" (UID: \"e3d963ba-cdf6-45a8-af39-b5617c50f5ee\") " pod="openshift-must-gather-hj8n5/crc-debug-r4zxc" Nov 28 12:06:16 crc kubenswrapper[4772]: I1128 12:06:16.306492 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hj8n5/crc-debug-r4zxc" Nov 28 12:06:16 crc kubenswrapper[4772]: W1128 12:06:16.335186 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3d963ba_cdf6_45a8_af39_b5617c50f5ee.slice/crio-4d2723b9d5ea7e59e8e34590a2d28af0602307727cbdd3c741ef3dec474c63f0 WatchSource:0}: Error finding container 4d2723b9d5ea7e59e8e34590a2d28af0602307727cbdd3c741ef3dec474c63f0: Status 404 returned error can't find the container with id 4d2723b9d5ea7e59e8e34590a2d28af0602307727cbdd3c741ef3dec474c63f0 Nov 28 12:06:16 crc kubenswrapper[4772]: I1128 12:06:16.985453 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hj8n5/crc-debug-r4zxc" event={"ID":"e3d963ba-cdf6-45a8-af39-b5617c50f5ee","Type":"ContainerStarted","Data":"4d2723b9d5ea7e59e8e34590a2d28af0602307727cbdd3c741ef3dec474c63f0"} Nov 28 12:06:23 crc kubenswrapper[4772]: I1128 12:06:23.895878 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:06:23 crc kubenswrapper[4772]: I1128 12:06:23.896528 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:06:28 crc kubenswrapper[4772]: I1128 12:06:28.093224 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hj8n5/crc-debug-r4zxc" event={"ID":"e3d963ba-cdf6-45a8-af39-b5617c50f5ee","Type":"ContainerStarted","Data":"60525dc274f70561bc50763d6e87bb6910db54aa1f6e2679ab5c5b7d8fd03529"} Nov 28 12:06:28 crc kubenswrapper[4772]: I1128 12:06:28.117229 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hj8n5/crc-debug-r4zxc" podStartSLOduration=1.789624917 podStartE2EDuration="13.117199534s" podCreationTimestamp="2025-11-28 12:06:15 +0000 UTC" firstStartedPulling="2025-11-28 12:06:16.336622943 +0000 UTC m=+3574.659866170" lastFinishedPulling="2025-11-28 12:06:27.66419756 +0000 UTC m=+3585.987440787" observedRunningTime="2025-11-28 12:06:28.108711427 +0000 UTC m=+3586.431954674" watchObservedRunningTime="2025-11-28 12:06:28.117199534 +0000 UTC m=+3586.440442761" Nov 28 12:06:53 crc kubenswrapper[4772]: I1128 12:06:53.896682 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:06:53 crc kubenswrapper[4772]: I1128 12:06:53.897587 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:07:05 crc kubenswrapper[4772]: I1128 12:07:05.478710 4772 generic.go:334] "Generic (PLEG): container finished" podID="e3d963ba-cdf6-45a8-af39-b5617c50f5ee" 
containerID="60525dc274f70561bc50763d6e87bb6910db54aa1f6e2679ab5c5b7d8fd03529" exitCode=0 Nov 28 12:07:05 crc kubenswrapper[4772]: I1128 12:07:05.478814 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hj8n5/crc-debug-r4zxc" event={"ID":"e3d963ba-cdf6-45a8-af39-b5617c50f5ee","Type":"ContainerDied","Data":"60525dc274f70561bc50763d6e87bb6910db54aa1f6e2679ab5c5b7d8fd03529"} Nov 28 12:07:06 crc kubenswrapper[4772]: I1128 12:07:06.600283 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hj8n5/crc-debug-r4zxc" Nov 28 12:07:06 crc kubenswrapper[4772]: I1128 12:07:06.647676 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hj8n5/crc-debug-r4zxc"] Nov 28 12:07:06 crc kubenswrapper[4772]: I1128 12:07:06.654157 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hj8n5/crc-debug-r4zxc"] Nov 28 12:07:06 crc kubenswrapper[4772]: I1128 12:07:06.759199 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3d963ba-cdf6-45a8-af39-b5617c50f5ee-host\") pod \"e3d963ba-cdf6-45a8-af39-b5617c50f5ee\" (UID: \"e3d963ba-cdf6-45a8-af39-b5617c50f5ee\") " Nov 28 12:07:06 crc kubenswrapper[4772]: I1128 12:07:06.759285 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3d963ba-cdf6-45a8-af39-b5617c50f5ee-host" (OuterVolumeSpecName: "host") pod "e3d963ba-cdf6-45a8-af39-b5617c50f5ee" (UID: "e3d963ba-cdf6-45a8-af39-b5617c50f5ee"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:07:06 crc kubenswrapper[4772]: I1128 12:07:06.759498 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67h7c\" (UniqueName: \"kubernetes.io/projected/e3d963ba-cdf6-45a8-af39-b5617c50f5ee-kube-api-access-67h7c\") pod \"e3d963ba-cdf6-45a8-af39-b5617c50f5ee\" (UID: \"e3d963ba-cdf6-45a8-af39-b5617c50f5ee\") " Nov 28 12:07:06 crc kubenswrapper[4772]: I1128 12:07:06.759910 4772 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3d963ba-cdf6-45a8-af39-b5617c50f5ee-host\") on node \"crc\" DevicePath \"\"" Nov 28 12:07:06 crc kubenswrapper[4772]: I1128 12:07:06.769448 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d963ba-cdf6-45a8-af39-b5617c50f5ee-kube-api-access-67h7c" (OuterVolumeSpecName: "kube-api-access-67h7c") pod "e3d963ba-cdf6-45a8-af39-b5617c50f5ee" (UID: "e3d963ba-cdf6-45a8-af39-b5617c50f5ee"). InnerVolumeSpecName "kube-api-access-67h7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:07:06 crc kubenswrapper[4772]: I1128 12:07:06.862507 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67h7c\" (UniqueName: \"kubernetes.io/projected/e3d963ba-cdf6-45a8-af39-b5617c50f5ee-kube-api-access-67h7c\") on node \"crc\" DevicePath \"\"" Nov 28 12:07:07 crc kubenswrapper[4772]: I1128 12:07:07.515035 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d2723b9d5ea7e59e8e34590a2d28af0602307727cbdd3c741ef3dec474c63f0" Nov 28 12:07:07 crc kubenswrapper[4772]: I1128 12:07:07.515113 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hj8n5/crc-debug-r4zxc" Nov 28 12:07:07 crc kubenswrapper[4772]: I1128 12:07:07.838927 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hj8n5/crc-debug-bj5vx"] Nov 28 12:07:07 crc kubenswrapper[4772]: E1128 12:07:07.839401 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d963ba-cdf6-45a8-af39-b5617c50f5ee" containerName="container-00" Nov 28 12:07:07 crc kubenswrapper[4772]: I1128 12:07:07.839421 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d963ba-cdf6-45a8-af39-b5617c50f5ee" containerName="container-00" Nov 28 12:07:07 crc kubenswrapper[4772]: I1128 12:07:07.839673 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d963ba-cdf6-45a8-af39-b5617c50f5ee" containerName="container-00" Nov 28 12:07:07 crc kubenswrapper[4772]: I1128 12:07:07.840476 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hj8n5/crc-debug-bj5vx" Nov 28 12:07:07 crc kubenswrapper[4772]: I1128 12:07:07.985837 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3867b631-e44d-4ac0-96fa-e5de587cd933-host\") pod \"crc-debug-bj5vx\" (UID: \"3867b631-e44d-4ac0-96fa-e5de587cd933\") " pod="openshift-must-gather-hj8n5/crc-debug-bj5vx" Nov 28 12:07:07 crc kubenswrapper[4772]: I1128 12:07:07.985984 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbblr\" (UniqueName: \"kubernetes.io/projected/3867b631-e44d-4ac0-96fa-e5de587cd933-kube-api-access-jbblr\") pod \"crc-debug-bj5vx\" (UID: \"3867b631-e44d-4ac0-96fa-e5de587cd933\") " pod="openshift-must-gather-hj8n5/crc-debug-bj5vx" Nov 28 12:07:08 crc kubenswrapper[4772]: I1128 12:07:08.008401 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3d963ba-cdf6-45a8-af39-b5617c50f5ee" path="/var/lib/kubelet/pods/e3d963ba-cdf6-45a8-af39-b5617c50f5ee/volumes" Nov 28 12:07:08 crc kubenswrapper[4772]: I1128 12:07:08.087644 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3867b631-e44d-4ac0-96fa-e5de587cd933-host\") pod \"crc-debug-bj5vx\" (UID: \"3867b631-e44d-4ac0-96fa-e5de587cd933\") " pod="openshift-must-gather-hj8n5/crc-debug-bj5vx" Nov 28 12:07:08 crc kubenswrapper[4772]: I1128 12:07:08.087802 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbblr\" (UniqueName: \"kubernetes.io/projected/3867b631-e44d-4ac0-96fa-e5de587cd933-kube-api-access-jbblr\") pod \"crc-debug-bj5vx\" (UID: \"3867b631-e44d-4ac0-96fa-e5de587cd933\") " pod="openshift-must-gather-hj8n5/crc-debug-bj5vx" Nov 28 12:07:08 crc kubenswrapper[4772]: I1128 12:07:08.087944 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3867b631-e44d-4ac0-96fa-e5de587cd933-host\") pod \"crc-debug-bj5vx\" (UID: \"3867b631-e44d-4ac0-96fa-e5de587cd933\") " pod="openshift-must-gather-hj8n5/crc-debug-bj5vx" Nov 28 12:07:08 crc kubenswrapper[4772]: I1128 12:07:08.163093 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbblr\" (UniqueName: \"kubernetes.io/projected/3867b631-e44d-4ac0-96fa-e5de587cd933-kube-api-access-jbblr\") pod \"crc-debug-bj5vx\" (UID: \"3867b631-e44d-4ac0-96fa-e5de587cd933\") " 
pod="openshift-must-gather-hj8n5/crc-debug-bj5vx" Nov 28 12:07:08 crc kubenswrapper[4772]: I1128 12:07:08.166016 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hj8n5/crc-debug-bj5vx" Nov 28 12:07:08 crc kubenswrapper[4772]: I1128 12:07:08.525669 4772 generic.go:334] "Generic (PLEG): container finished" podID="3867b631-e44d-4ac0-96fa-e5de587cd933" containerID="22695375b583bcb18e06e379e1ed552c0ae67790a068bccfc22e492b049fd474" exitCode=0 Nov 28 12:07:08 crc kubenswrapper[4772]: I1128 12:07:08.525710 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hj8n5/crc-debug-bj5vx" event={"ID":"3867b631-e44d-4ac0-96fa-e5de587cd933","Type":"ContainerDied","Data":"22695375b583bcb18e06e379e1ed552c0ae67790a068bccfc22e492b049fd474"} Nov 28 12:07:08 crc kubenswrapper[4772]: I1128 12:07:08.525734 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hj8n5/crc-debug-bj5vx" event={"ID":"3867b631-e44d-4ac0-96fa-e5de587cd933","Type":"ContainerStarted","Data":"fcb634ef15772e2703758fd41fd37c37d326df95bb7dd28f9f622f1615476050"} Nov 28 12:07:09 crc kubenswrapper[4772]: I1128 12:07:09.117894 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hj8n5/crc-debug-bj5vx"] Nov 28 12:07:09 crc kubenswrapper[4772]: I1128 12:07:09.125966 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hj8n5/crc-debug-bj5vx"] Nov 28 12:07:09 crc kubenswrapper[4772]: I1128 12:07:09.657394 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hj8n5/crc-debug-bj5vx" Nov 28 12:07:09 crc kubenswrapper[4772]: I1128 12:07:09.823701 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbblr\" (UniqueName: \"kubernetes.io/projected/3867b631-e44d-4ac0-96fa-e5de587cd933-kube-api-access-jbblr\") pod \"3867b631-e44d-4ac0-96fa-e5de587cd933\" (UID: \"3867b631-e44d-4ac0-96fa-e5de587cd933\") " Nov 28 12:07:09 crc kubenswrapper[4772]: I1128 12:07:09.823783 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3867b631-e44d-4ac0-96fa-e5de587cd933-host\") pod \"3867b631-e44d-4ac0-96fa-e5de587cd933\" (UID: \"3867b631-e44d-4ac0-96fa-e5de587cd933\") " Nov 28 12:07:09 crc kubenswrapper[4772]: I1128 12:07:09.823921 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3867b631-e44d-4ac0-96fa-e5de587cd933-host" (OuterVolumeSpecName: "host") pod "3867b631-e44d-4ac0-96fa-e5de587cd933" (UID: "3867b631-e44d-4ac0-96fa-e5de587cd933"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:07:09 crc kubenswrapper[4772]: I1128 12:07:09.824236 4772 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3867b631-e44d-4ac0-96fa-e5de587cd933-host\") on node \"crc\" DevicePath \"\"" Nov 28 12:07:09 crc kubenswrapper[4772]: I1128 12:07:09.831623 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3867b631-e44d-4ac0-96fa-e5de587cd933-kube-api-access-jbblr" (OuterVolumeSpecName: "kube-api-access-jbblr") pod "3867b631-e44d-4ac0-96fa-e5de587cd933" (UID: "3867b631-e44d-4ac0-96fa-e5de587cd933"). InnerVolumeSpecName "kube-api-access-jbblr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:07:09 crc kubenswrapper[4772]: I1128 12:07:09.926183 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbblr\" (UniqueName: \"kubernetes.io/projected/3867b631-e44d-4ac0-96fa-e5de587cd933-kube-api-access-jbblr\") on node \"crc\" DevicePath \"\"" Nov 28 12:07:10 crc kubenswrapper[4772]: I1128 12:07:10.007050 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3867b631-e44d-4ac0-96fa-e5de587cd933" path="/var/lib/kubelet/pods/3867b631-e44d-4ac0-96fa-e5de587cd933/volumes" Nov 28 12:07:10 crc kubenswrapper[4772]: I1128 12:07:10.286872 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hj8n5/crc-debug-999ck"] Nov 28 12:07:10 crc kubenswrapper[4772]: E1128 12:07:10.287517 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3867b631-e44d-4ac0-96fa-e5de587cd933" containerName="container-00" Nov 28 12:07:10 crc kubenswrapper[4772]: I1128 12:07:10.287542 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3867b631-e44d-4ac0-96fa-e5de587cd933" containerName="container-00" Nov 28 12:07:10 crc kubenswrapper[4772]: I1128 12:07:10.287947 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3867b631-e44d-4ac0-96fa-e5de587cd933" containerName="container-00" Nov 28 12:07:10 crc kubenswrapper[4772]: I1128 12:07:10.288955 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hj8n5/crc-debug-999ck" Nov 28 12:07:10 crc kubenswrapper[4772]: I1128 12:07:10.435625 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a12538e7-2cfc-43c6-aa78-eaa636db7f22-host\") pod \"crc-debug-999ck\" (UID: \"a12538e7-2cfc-43c6-aa78-eaa636db7f22\") " pod="openshift-must-gather-hj8n5/crc-debug-999ck" Nov 28 12:07:10 crc kubenswrapper[4772]: I1128 12:07:10.436001 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmbzl\" (UniqueName: \"kubernetes.io/projected/a12538e7-2cfc-43c6-aa78-eaa636db7f22-kube-api-access-qmbzl\") pod \"crc-debug-999ck\" (UID: \"a12538e7-2cfc-43c6-aa78-eaa636db7f22\") " pod="openshift-must-gather-hj8n5/crc-debug-999ck" Nov 28 12:07:10 crc kubenswrapper[4772]: I1128 12:07:10.538050 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a12538e7-2cfc-43c6-aa78-eaa636db7f22-host\") pod \"crc-debug-999ck\" (UID: \"a12538e7-2cfc-43c6-aa78-eaa636db7f22\") " pod="openshift-must-gather-hj8n5/crc-debug-999ck" Nov 28 12:07:10 crc kubenswrapper[4772]: I1128 12:07:10.538204 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a12538e7-2cfc-43c6-aa78-eaa636db7f22-host\") pod \"crc-debug-999ck\" (UID: \"a12538e7-2cfc-43c6-aa78-eaa636db7f22\") " pod="openshift-must-gather-hj8n5/crc-debug-999ck" Nov 28 12:07:10 crc kubenswrapper[4772]: I1128 12:07:10.538215 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmbzl\" (UniqueName: \"kubernetes.io/projected/a12538e7-2cfc-43c6-aa78-eaa636db7f22-kube-api-access-qmbzl\") pod \"crc-debug-999ck\" (UID: \"a12538e7-2cfc-43c6-aa78-eaa636db7f22\") " pod="openshift-must-gather-hj8n5/crc-debug-999ck" Nov 28 12:07:10 crc kubenswrapper[4772]: I1128 12:07:10.549949 4772 scope.go:117] "RemoveContainer" 
containerID="22695375b583bcb18e06e379e1ed552c0ae67790a068bccfc22e492b049fd474" Nov 28 12:07:10 crc kubenswrapper[4772]: I1128 12:07:10.549972 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hj8n5/crc-debug-bj5vx" Nov 28 12:07:10 crc kubenswrapper[4772]: I1128 12:07:10.561212 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmbzl\" (UniqueName: \"kubernetes.io/projected/a12538e7-2cfc-43c6-aa78-eaa636db7f22-kube-api-access-qmbzl\") pod \"crc-debug-999ck\" (UID: \"a12538e7-2cfc-43c6-aa78-eaa636db7f22\") " pod="openshift-must-gather-hj8n5/crc-debug-999ck" Nov 28 12:07:10 crc kubenswrapper[4772]: I1128 12:07:10.606406 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hj8n5/crc-debug-999ck" Nov 28 12:07:10 crc kubenswrapper[4772]: W1128 12:07:10.664806 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda12538e7_2cfc_43c6_aa78_eaa636db7f22.slice/crio-7d05892586b9821756bf7fa09625a5c82de3af087b2b19f4de052c3c223559f2 WatchSource:0}: Error finding container 7d05892586b9821756bf7fa09625a5c82de3af087b2b19f4de052c3c223559f2: Status 404 returned error can't find the container with id 7d05892586b9821756bf7fa09625a5c82de3af087b2b19f4de052c3c223559f2 Nov 28 12:07:11 crc kubenswrapper[4772]: I1128 12:07:11.564109 4772 generic.go:334] "Generic (PLEG): container finished" podID="a12538e7-2cfc-43c6-aa78-eaa636db7f22" containerID="71147e9216eec7ce00f4b74132bc7183da0d2d2fc2134cacf712260c1d730f3c" exitCode=0 Nov 28 12:07:11 crc kubenswrapper[4772]: I1128 12:07:11.564184 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hj8n5/crc-debug-999ck" event={"ID":"a12538e7-2cfc-43c6-aa78-eaa636db7f22","Type":"ContainerDied","Data":"71147e9216eec7ce00f4b74132bc7183da0d2d2fc2134cacf712260c1d730f3c"} Nov 28 12:07:11 crc kubenswrapper[4772]: I1128 12:07:11.564715 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hj8n5/crc-debug-999ck" event={"ID":"a12538e7-2cfc-43c6-aa78-eaa636db7f22","Type":"ContainerStarted","Data":"7d05892586b9821756bf7fa09625a5c82de3af087b2b19f4de052c3c223559f2"} Nov 28 12:07:11 crc kubenswrapper[4772]: I1128 12:07:11.623481 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hj8n5/crc-debug-999ck"] Nov 28 12:07:11 crc kubenswrapper[4772]: I1128 12:07:11.631964 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hj8n5/crc-debug-999ck"] Nov 28 12:07:12 crc kubenswrapper[4772]: I1128 12:07:12.692660 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hj8n5/crc-debug-999ck" Nov 28 12:07:12 crc kubenswrapper[4772]: I1128 12:07:12.783933 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmbzl\" (UniqueName: \"kubernetes.io/projected/a12538e7-2cfc-43c6-aa78-eaa636db7f22-kube-api-access-qmbzl\") pod \"a12538e7-2cfc-43c6-aa78-eaa636db7f22\" (UID: \"a12538e7-2cfc-43c6-aa78-eaa636db7f22\") " Nov 28 12:07:12 crc kubenswrapper[4772]: I1128 12:07:12.784152 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a12538e7-2cfc-43c6-aa78-eaa636db7f22-host\") pod \"a12538e7-2cfc-43c6-aa78-eaa636db7f22\" (UID: \"a12538e7-2cfc-43c6-aa78-eaa636db7f22\") " Nov 28 12:07:12 crc kubenswrapper[4772]: I1128 12:07:12.784436 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a12538e7-2cfc-43c6-aa78-eaa636db7f22-host" (OuterVolumeSpecName: "host") pod "a12538e7-2cfc-43c6-aa78-eaa636db7f22" (UID: "a12538e7-2cfc-43c6-aa78-eaa636db7f22"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:07:12 crc kubenswrapper[4772]: I1128 12:07:12.784808 4772 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a12538e7-2cfc-43c6-aa78-eaa636db7f22-host\") on node \"crc\" DevicePath \"\"" Nov 28 12:07:12 crc kubenswrapper[4772]: I1128 12:07:12.798522 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a12538e7-2cfc-43c6-aa78-eaa636db7f22-kube-api-access-qmbzl" (OuterVolumeSpecName: "kube-api-access-qmbzl") pod "a12538e7-2cfc-43c6-aa78-eaa636db7f22" (UID: "a12538e7-2cfc-43c6-aa78-eaa636db7f22"). InnerVolumeSpecName "kube-api-access-qmbzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:07:12 crc kubenswrapper[4772]: I1128 12:07:12.886654 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmbzl\" (UniqueName: \"kubernetes.io/projected/a12538e7-2cfc-43c6-aa78-eaa636db7f22-kube-api-access-qmbzl\") on node \"crc\" DevicePath \"\"" Nov 28 12:07:13 crc kubenswrapper[4772]: I1128 12:07:13.783016 4772 scope.go:117] "RemoveContainer" containerID="71147e9216eec7ce00f4b74132bc7183da0d2d2fc2134cacf712260c1d730f3c" Nov 28 12:07:13 crc kubenswrapper[4772]: I1128 12:07:13.783353 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hj8n5/crc-debug-999ck" Nov 28 12:07:14 crc kubenswrapper[4772]: I1128 12:07:14.023231 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a12538e7-2cfc-43c6-aa78-eaa636db7f22" path="/var/lib/kubelet/pods/a12538e7-2cfc-43c6-aa78-eaa636db7f22/volumes" Nov 28 12:07:23 crc kubenswrapper[4772]: I1128 12:07:23.896151 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:07:23 crc kubenswrapper[4772]: I1128 12:07:23.897825 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:07:23 crc kubenswrapper[4772]: I1128 12:07:23.897940 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 12:07:23 crc kubenswrapper[4772]: I1128 12:07:23.898746 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4675f58701a5faa01b0838065d19af68e76bbebaa0a8983faa08ad0061f280e"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:07:23 crc kubenswrapper[4772]: I1128 12:07:23.898891 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://d4675f58701a5faa01b0838065d19af68e76bbebaa0a8983faa08ad0061f280e" gracePeriod=600 Nov 28 12:07:24 crc kubenswrapper[4772]: I1128 12:07:24.889044 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="d4675f58701a5faa01b0838065d19af68e76bbebaa0a8983faa08ad0061f280e" exitCode=0 Nov 28 12:07:24 crc kubenswrapper[4772]: I1128 12:07:24.889214 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"d4675f58701a5faa01b0838065d19af68e76bbebaa0a8983faa08ad0061f280e"} Nov 28 12:07:24 crc kubenswrapper[4772]: I1128 12:07:24.889679 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1"} Nov 28 12:07:24 crc kubenswrapper[4772]: I1128 12:07:24.889702 4772 scope.go:117] "RemoveContainer" containerID="babdf282770e53675885e1e2746f985047a0733ada25164cdb85564d15935bb1" Nov 28 12:07:27 crc kubenswrapper[4772]: I1128 12:07:27.305778 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-667fbdd95d-n4zbv_863244d7-6e70-4ac3-a7f1-485205de6c8e/barbican-api/0.log" Nov 28 12:07:27 crc kubenswrapper[4772]: I1128 12:07:27.384077 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-667fbdd95d-n4zbv_863244d7-6e70-4ac3-a7f1-485205de6c8e/barbican-api-log/0.log" Nov 28 12:07:27 crc kubenswrapper[4772]: I1128 12:07:27.468486 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-bd5f4f5b6-n56r2_c591ea97-2d66-45fe-85e2-1c22c6af8218/barbican-keystone-listener/0.log" Nov 28 12:07:27 crc kubenswrapper[4772]: I1128 12:07:27.554420 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-bd5f4f5b6-n56r2_c591ea97-2d66-45fe-85e2-1c22c6af8218/barbican-keystone-listener-log/0.log" Nov 28 12:07:27 crc kubenswrapper[4772]: I1128 12:07:27.643164 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86946c9dff-9km2x_8f72d166-9d59-443d-9af2-3d93c158ef98/barbican-worker-log/0.log" Nov 28 12:07:27 crc kubenswrapper[4772]: I1128 12:07:27.648073 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86946c9dff-9km2x_8f72d166-9d59-443d-9af2-3d93c158ef98/barbican-worker/0.log" Nov 28 12:07:27 crc kubenswrapper[4772]: I1128 12:07:27.803559 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt_ab57579b-65be-4ef3-977f-574ca00f3d9a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:27 crc kubenswrapper[4772]: I1128 12:07:27.852325 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3d55dc2a-8d2f-4f27-82ef-11744255c40c/ceilometer-central-agent/0.log" Nov 28 12:07:27 crc kubenswrapper[4772]: I1128 12:07:27.955905 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3d55dc2a-8d2f-4f27-82ef-11744255c40c/ceilometer-notification-agent/0.log" Nov 28 12:07:27 crc kubenswrapper[4772]: I1128 12:07:27.971314 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3d55dc2a-8d2f-4f27-82ef-11744255c40c/proxy-httpd/0.log" Nov 28 12:07:27 crc kubenswrapper[4772]: I1128 12:07:27.994939 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3d55dc2a-8d2f-4f27-82ef-11744255c40c/sg-core/0.log" Nov 28 12:07:28 crc kubenswrapper[4772]: I1128 12:07:28.181534 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_16a2193c-fc2c-489d-9aff-edfe826fdb75/cinder-api-log/0.log" Nov 28 12:07:28 crc kubenswrapper[4772]: I1128 12:07:28.209244 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_16a2193c-fc2c-489d-9aff-edfe826fdb75/cinder-api/0.log" Nov 28 12:07:28 crc kubenswrapper[4772]: I1128 12:07:28.287608 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c/cinder-scheduler/0.log" Nov 28 12:07:28 crc kubenswrapper[4772]: I1128 12:07:28.374643 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c/probe/0.log" Nov 28 12:07:28 crc kubenswrapper[4772]: I1128 12:07:28.433418 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-68zj7_9cf12a49-cbf2-4721-92a0-e4f9f88deb0c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:28 crc kubenswrapper[4772]: I1128 12:07:28.634937 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-shv6k_03f3a9a0-ccb5-41d6-8ba8-419fb9775213/init/0.log" Nov 28 12:07:28 crc kubenswrapper[4772]: I1128 12:07:28.640502 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j_b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:28 crc kubenswrapper[4772]: I1128 12:07:28.862071 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-shv6k_03f3a9a0-ccb5-41d6-8ba8-419fb9775213/dnsmasq-dns/0.log" Nov 28 12:07:28 crc kubenswrapper[4772]: I1128 12:07:28.873208 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-shv6k_03f3a9a0-ccb5-41d6-8ba8-419fb9775213/init/0.log" Nov 28 12:07:28 crc kubenswrapper[4772]: I1128 12:07:28.912428 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-dx99x_49998ed7-1acb-4812-af0d-822d07292334/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:29 crc kubenswrapper[4772]: I1128 12:07:29.082770 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_94aac27d-0e3b-431a-be6e-a88e1eeb16db/glance-httpd/0.log" Nov 28 12:07:29 crc kubenswrapper[4772]: I1128 12:07:29.189959 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_94aac27d-0e3b-431a-be6e-a88e1eeb16db/glance-log/0.log" Nov 28 12:07:29 crc kubenswrapper[4772]: I1128 12:07:29.314853 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_090e21bf-8aa4-47f0-99ce-d8225cdac91c/glance-log/0.log" Nov 28 12:07:29 crc kubenswrapper[4772]: I1128 12:07:29.333820 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_090e21bf-8aa4-47f0-99ce-d8225cdac91c/glance-httpd/0.log" Nov 28 12:07:29 crc kubenswrapper[4772]: I1128 12:07:29.475197 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f99664784-xpqjq_3a9ada7a-c788-41ad-87a6-431ba8c94394/horizon/0.log" Nov 28 12:07:29 crc kubenswrapper[4772]: I1128 12:07:29.663017 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-94trn_99b393cd-c63f-4007-9cb8-26a6e5710794/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:29 crc kubenswrapper[4772]: I1128 12:07:29.861100 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f99664784-xpqjq_3a9ada7a-c788-41ad-87a6-431ba8c94394/horizon-log/0.log" Nov 28 12:07:29 crc kubenswrapper[4772]: I1128 12:07:29.926378 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-78lv5_e65b46c0-b274-454b-b98d-b2425334abfd/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:30 crc kubenswrapper[4772]: I1128 12:07:30.145996 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29405521-g4n7r_058bf0b5-6899-4a5d-a098-e40b52cfd512/keystone-cron/0.log" Nov 28 12:07:30 crc kubenswrapper[4772]: I1128 12:07:30.164039 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-56987d8b67-lwl5z_473cc657-5696-4761-a692-e4929954d45b/keystone-api/0.log" Nov 28 12:07:30 crc kubenswrapper[4772]: I1128 12:07:30.372857 4772 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_kube-state-metrics-0_3bfc712b-ffa2-4fc0-825c-def988a3f1b2/kube-state-metrics/0.log" Nov 28 12:07:30 crc kubenswrapper[4772]: I1128 12:07:30.434816 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp_0b865b7c-a1c7-4f0b-b289-d980f76a946d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:30 crc kubenswrapper[4772]: I1128 12:07:30.673784 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-758875cc6f-fmsqk_d251a047-3f90-4db3-8cae-b65b24395fdf/neutron-api/0.log" Nov 28 12:07:30 crc kubenswrapper[4772]: I1128 12:07:30.730382 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-758875cc6f-fmsqk_d251a047-3f90-4db3-8cae-b65b24395fdf/neutron-httpd/0.log" Nov 28 12:07:31 crc kubenswrapper[4772]: I1128 12:07:31.028201 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg_49725426-9a39-40a3-8921-dadd52884a4a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:31 crc kubenswrapper[4772]: I1128 12:07:31.422888 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_04cf07d5-2221-4a24-af67-730b13cd2021/nova-api-log/0.log" Nov 28 12:07:31 crc kubenswrapper[4772]: I1128 12:07:31.450209 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_367a81eb-3924-42ce-8fcf-258e2ad0b494/nova-cell0-conductor-conductor/0.log" Nov 28 12:07:31 crc kubenswrapper[4772]: I1128 12:07:31.619451 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_04cf07d5-2221-4a24-af67-730b13cd2021/nova-api-api/0.log" Nov 28 12:07:31 crc kubenswrapper[4772]: I1128 12:07:31.682114 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4e7b45c1-3ed2-4693-b75b-faf73867de92/nova-cell1-conductor-conductor/0.log" Nov 28 12:07:31 crc kubenswrapper[4772]: I1128 12:07:31.740573 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a/nova-cell1-novncproxy-novncproxy/0.log" Nov 28 12:07:31 crc kubenswrapper[4772]: I1128 12:07:31.875393 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-pl4v2_3bcf40ed-6681-4685-8277-c31e223c9686/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:32 crc kubenswrapper[4772]: I1128 12:07:32.058235 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_36168857-1f1a-48f9-8adb-53889086486e/nova-metadata-log/0.log" Nov 28 12:07:32 crc kubenswrapper[4772]: I1128 12:07:32.336190 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d68d6b7e-6515-4085-82db-aa7d361d06e6/nova-scheduler-scheduler/0.log" Nov 28 12:07:32 crc kubenswrapper[4772]: I1128 12:07:32.339041 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fbc92675-93c6-4d66-afb0-d83636cbf853/mysql-bootstrap/0.log" Nov 28 12:07:32 crc kubenswrapper[4772]: I1128 12:07:32.483914 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fbc92675-93c6-4d66-afb0-d83636cbf853/mysql-bootstrap/0.log" Nov 28 12:07:32 crc kubenswrapper[4772]: I1128 12:07:32.531845 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_fbc92675-93c6-4d66-afb0-d83636cbf853/galera/0.log" Nov 28 12:07:32 crc kubenswrapper[4772]: I1128 12:07:32.692235 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20/mysql-bootstrap/0.log" Nov 28 12:07:32 crc kubenswrapper[4772]: I1128 12:07:32.920291 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20/galera/0.log" Nov 28 12:07:32 crc kubenswrapper[4772]: I1128 12:07:32.921575 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20/mysql-bootstrap/0.log" Nov 28 12:07:33 crc kubenswrapper[4772]: I1128 12:07:33.084475 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_36168857-1f1a-48f9-8adb-53889086486e/nova-metadata-metadata/0.log" Nov 28 12:07:33 crc kubenswrapper[4772]: I1128 12:07:33.157748 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_90047d06-b7f8-416f-a4f1-6f76b5b94f39/openstackclient/0.log" Nov 28 12:07:33 crc kubenswrapper[4772]: I1128 12:07:33.196650 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-9xz5n_620b05d3-f04b-4d52-b3c3-039dcc751696/openstack-network-exporter/0.log" Nov 28 12:07:33 crc kubenswrapper[4772]: I1128 12:07:33.350576 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gjbm2_dbdbf695-f81d-431b-8330-6745cdbf9ab1/ovsdb-server-init/0.log" Nov 28 12:07:33 crc kubenswrapper[4772]: I1128 12:07:33.532642 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gjbm2_dbdbf695-f81d-431b-8330-6745cdbf9ab1/ovsdb-server-init/0.log" Nov 28 12:07:33 crc kubenswrapper[4772]: I1128 12:07:33.541093 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gjbm2_dbdbf695-f81d-431b-8330-6745cdbf9ab1/ovs-vswitchd/0.log" Nov 28 12:07:33 crc kubenswrapper[4772]: I1128 12:07:33.542809 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gjbm2_dbdbf695-f81d-431b-8330-6745cdbf9ab1/ovsdb-server/0.log" Nov 28 12:07:33 crc kubenswrapper[4772]: I1128 12:07:33.702990 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qzrrh_42fee486-89c9-4f0e-9db6-ac695b62a588/ovn-controller/0.log" Nov 28 12:07:33 crc kubenswrapper[4772]: I1128 12:07:33.798568 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9lxg4_9b6d18b1-abcf-4c1f-b811-6df572185255/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:33 crc kubenswrapper[4772]: I1128 12:07:33.918092 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6a9655ce-0d05-449e-899d-6cbaa25cd5e9/openstack-network-exporter/0.log" Nov 28 12:07:33 crc kubenswrapper[4772]: I1128 12:07:33.992548 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6a9655ce-0d05-449e-899d-6cbaa25cd5e9/ovn-northd/0.log" Nov 28 12:07:34 crc kubenswrapper[4772]: I1128 12:07:34.083639 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_934fc42a-8a76-4b95-9ad0-5e13fa47d1cb/openstack-network-exporter/0.log" Nov 28 12:07:34 crc kubenswrapper[4772]: I1128 12:07:34.088296 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_934fc42a-8a76-4b95-9ad0-5e13fa47d1cb/ovsdbserver-nb/0.log" Nov 28 12:07:34 crc kubenswrapper[4772]: I1128 12:07:34.246895 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ec35ef2e-e0de-4d42-9ab0-35033d549ac9/ovsdbserver-sb/0.log" Nov 28 12:07:34 crc kubenswrapper[4772]: I1128 12:07:34.312118 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ec35ef2e-e0de-4d42-9ab0-35033d549ac9/openstack-network-exporter/0.log" Nov 28 12:07:34 crc kubenswrapper[4772]: I1128 12:07:34.541260 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-74fc54dcd4-9z4wp_0c712f4c-0d11-4e33-a725-4a5ec8f62c5f/placement-api/0.log" Nov 28 12:07:34 crc kubenswrapper[4772]: I1128 12:07:34.543809 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-74fc54dcd4-9z4wp_0c712f4c-0d11-4e33-a725-4a5ec8f62c5f/placement-log/0.log" Nov 28 12:07:34 crc kubenswrapper[4772]: I1128 12:07:34.549191 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9e3b8854-8b5b-441d-97a7-12e48cffafb6/setup-container/0.log" Nov 28 12:07:34 crc kubenswrapper[4772]: I1128 12:07:34.736504 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9e3b8854-8b5b-441d-97a7-12e48cffafb6/setup-container/0.log" Nov 28 12:07:34 crc kubenswrapper[4772]: I1128 12:07:34.784622 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1a8859cb-c89a-4d2c-ac6b-6abd31388e61/setup-container/0.log" Nov 28 12:07:34 crc kubenswrapper[4772]: I1128 12:07:34.834566 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9e3b8854-8b5b-441d-97a7-12e48cffafb6/rabbitmq/0.log" Nov 28 12:07:34 crc kubenswrapper[4772]: I1128 12:07:34.959594 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1a8859cb-c89a-4d2c-ac6b-6abd31388e61/setup-container/0.log" Nov 28 12:07:34 crc kubenswrapper[4772]: I1128 12:07:34.962367 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1a8859cb-c89a-4d2c-ac6b-6abd31388e61/rabbitmq/0.log" Nov 28 12:07:35 crc kubenswrapper[4772]: I1128 12:07:35.089836 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp_02da511d-7da9-49de-bed3-34ecbf58b864/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:35 crc kubenswrapper[4772]: I1128 12:07:35.137632 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-dkl6g_62194471-8bd2-46f1-9891-e2bfe7cf8d67/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:35 crc kubenswrapper[4772]: I1128 12:07:35.270597 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw_8a8788a2-2374-4b48-b5c6-f6ab77a21711/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:35 crc kubenswrapper[4772]: I1128 12:07:35.431848 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-mxktm_a262a424-caf0-4d6e-95e6-c0ca5ff2473b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:35 crc kubenswrapper[4772]: I1128 12:07:35.569683 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-wwp4z_8e844804-6f77-4b6c-93c7-cdc083c5673d/ssh-known-hosts-edpm-deployment/0.log" Nov 28 12:07:35 crc kubenswrapper[4772]: I1128 12:07:35.779384 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d5fc6594c-kj2rm_4c10f0c5-2315-469e-bda3-d3b66ab776e6/proxy-server/0.log" Nov 28 12:07:35 crc kubenswrapper[4772]: I1128 12:07:35.789655 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d5fc6594c-kj2rm_4c10f0c5-2315-469e-bda3-d3b66ab776e6/proxy-httpd/0.log" Nov 28 12:07:35 crc kubenswrapper[4772]: I1128 12:07:35.897883 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wmtrl_1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6/swift-ring-rebalance/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.035746 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/account-auditor/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.084633 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/account-reaper/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.146468 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/account-replicator/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.216914 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/account-server/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.252933 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/container-auditor/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.281969 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/container-replicator/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.338722 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/container-server/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.452474 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/object-expirer/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.454627 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/container-updater/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.474425 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/object-auditor/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.557522 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/object-replicator/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.634399 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/object-updater/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.703976 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/rsync/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.704155 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/object-server/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.813503 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/swift-recon-cron/0.log" Nov 28 12:07:36 crc kubenswrapper[4772]: I1128 12:07:36.994975 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-9srrh_bed1e099-94a0-45ab-9686-4488e1df9252/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:37 crc kubenswrapper[4772]: I1128 12:07:37.027064 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_39592588-10c2-45fd-88fb-cb63f200c871/tempest-tests-tempest-tests-runner/0.log" Nov 28 12:07:37 crc kubenswrapper[4772]: I1128 12:07:37.227895 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_3e1800cb-ecd9-443b-b9a9-6437d8abbfc7/test-operator-logs-container/0.log" Nov 28 12:07:37 crc kubenswrapper[4772]: I1128 12:07:37.278988 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-t78xp_553d598e-4476-450b-952b-f8269626bfa5/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:07:45 crc kubenswrapper[4772]: I1128 12:07:45.473184 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f9e491cd-f369-412c-9b41-77844ff3057d/memcached/0.log" Nov 28 12:08:00 crc kubenswrapper[4772]: I1128 12:08:00.740608 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9_565eb734-9f39-468d-97ee-7d119b15d945/util/0.log" Nov 28 12:08:00 crc kubenswrapper[4772]: I1128 12:08:00.893719 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9_565eb734-9f39-468d-97ee-7d119b15d945/pull/0.log" Nov 28 12:08:00 crc kubenswrapper[4772]: I1128 12:08:00.917070 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9_565eb734-9f39-468d-97ee-7d119b15d945/util/0.log" Nov 28 12:08:00 crc kubenswrapper[4772]: I1128 12:08:00.930978 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9_565eb734-9f39-468d-97ee-7d119b15d945/pull/0.log" Nov 28 12:08:01 crc kubenswrapper[4772]: I1128 12:08:01.051722 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9_565eb734-9f39-468d-97ee-7d119b15d945/util/0.log" Nov 28 12:08:01 crc kubenswrapper[4772]: I1128 12:08:01.102776 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9_565eb734-9f39-468d-97ee-7d119b15d945/extract/0.log" Nov 28 12:08:01 crc kubenswrapper[4772]: I1128 12:08:01.154660 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9_565eb734-9f39-468d-97ee-7d119b15d945/pull/0.log" Nov 28 12:08:01 crc kubenswrapper[4772]: I1128 12:08:01.227624 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-8n59v_c063126d-a9d6-4a2c-96b4-0b0a42a94fff/kube-rbac-proxy/0.log" Nov 28 12:08:01 crc kubenswrapper[4772]: I1128 12:08:01.426692 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-8n59v_c063126d-a9d6-4a2c-96b4-0b0a42a94fff/manager/0.log" Nov 28 12:08:01 crc kubenswrapper[4772]: I1128 12:08:01.516255 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-zkr58_e98df0ac-d8d5-49fd-a331-509b0736bbb1/kube-rbac-proxy/0.log" Nov 28 12:08:01 crc kubenswrapper[4772]: I1128 12:08:01.560812 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-zkr58_e98df0ac-d8d5-49fd-a331-509b0736bbb1/manager/0.log" Nov 28 12:08:01 crc kubenswrapper[4772]: I1128 12:08:01.661345 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-kwb9r_5278900b-7407-46c5-b420-c5569e508132/kube-rbac-proxy/0.log" Nov 28 12:08:01 crc kubenswrapper[4772]: I1128 12:08:01.685132 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-kwb9r_5278900b-7407-46c5-b420-c5569e508132/manager/0.log" Nov 28 12:08:01 crc kubenswrapper[4772]: I1128 12:08:01.907576 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-7spr8_7657168c-6a48-435a-92c3-b93970b60d07/kube-rbac-proxy/0.log" Nov 28 12:08:01 crc kubenswrapper[4772]: I1128 12:08:01.944108 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-7spr8_7657168c-6a48-435a-92c3-b93970b60d07/manager/0.log" Nov 28 12:08:01 crc kubenswrapper[4772]: I1128 12:08:01.991334 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-6r566_7f96e59b-e8a5-471a-8e43-4ae8edfbc7bb/kube-rbac-proxy/0.log" Nov 28 12:08:02 crc kubenswrapper[4772]: I1128 12:08:02.071556 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-6r566_7f96e59b-e8a5-471a-8e43-4ae8edfbc7bb/manager/0.log" Nov 28 12:08:02 crc kubenswrapper[4772]: I1128 12:08:02.136764 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-qksc2_b0c3e372-422f-46e4-94e3-51ed4b3c0fd0/kube-rbac-proxy/0.log" Nov 28 12:08:02 crc kubenswrapper[4772]: I1128 12:08:02.194006 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-qksc2_b0c3e372-422f-46e4-94e3-51ed4b3c0fd0/manager/0.log" Nov 28 12:08:02 crc kubenswrapper[4772]: I1128 12:08:02.299078 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-c9nnk_90486ac7-ac7e-418a-9f2a-5bf934e996ca/kube-rbac-proxy/0.log" Nov 28 12:08:02 crc kubenswrapper[4772]: I1128 12:08:02.455779 4772 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-c9nnk_90486ac7-ac7e-418a-9f2a-5bf934e996ca/manager/0.log" Nov 28 12:08:02 crc kubenswrapper[4772]: I1128 12:08:02.504279 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-tj9m8_c29a1c46-5112-4d85-8f8f-b494575bd428/kube-rbac-proxy/0.log" Nov 28 12:08:02 crc kubenswrapper[4772]: I1128 12:08:02.515729 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-tj9m8_c29a1c46-5112-4d85-8f8f-b494575bd428/manager/0.log" Nov 28 12:08:02 crc kubenswrapper[4772]: I1128 12:08:02.727397 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-ftntr_f569792f-b95e-4f7a-b58e-22bd27c56dfd/kube-rbac-proxy/0.log" Nov 28 12:08:02 crc kubenswrapper[4772]: I1128 12:08:02.732992 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-ftntr_f569792f-b95e-4f7a-b58e-22bd27c56dfd/manager/0.log" Nov 28 12:08:02 crc kubenswrapper[4772]: I1128 12:08:02.855993 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-2w6xw_d117b1a7-48be-4cc5-928f-b22d31a16b7f/kube-rbac-proxy/0.log" Nov 28 12:08:02 crc kubenswrapper[4772]: I1128 12:08:02.906336 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-2w6xw_d117b1a7-48be-4cc5-928f-b22d31a16b7f/manager/0.log" Nov 28 12:08:02 crc kubenswrapper[4772]: I1128 12:08:02.921136 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-nfb47_2dc36b3d-99ac-4a89-bdc3-309a12cc887e/kube-rbac-proxy/0.log" Nov 28 12:08:03 crc kubenswrapper[4772]: I1128 12:08:03.047012 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-nfb47_2dc36b3d-99ac-4a89-bdc3-309a12cc887e/manager/0.log" Nov 28 12:08:03 crc kubenswrapper[4772]: I1128 12:08:03.097946 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-l8rvz_f67d3c6d-0b62-4162-bfeb-24da441f5edc/kube-rbac-proxy/0.log" Nov 28 12:08:03 crc kubenswrapper[4772]: I1128 12:08:03.159889 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-l8rvz_f67d3c6d-0b62-4162-bfeb-24da441f5edc/manager/0.log" Nov 28 12:08:03 crc kubenswrapper[4772]: I1128 12:08:03.262650 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-sc4xf_a7f0f276-5402-4e33-bd63-f6df7819f966/kube-rbac-proxy/0.log" Nov 28 12:08:03 crc kubenswrapper[4772]: I1128 12:08:03.347248 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-sc4xf_a7f0f276-5402-4e33-bd63-f6df7819f966/manager/0.log" Nov 28 12:08:03 crc kubenswrapper[4772]: I1128 12:08:03.433095 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-k7z7p_7f63617e-c125-40e3-a273-4180f7d8d45c/manager/0.log" Nov 28 12:08:03 crc kubenswrapper[4772]: I1128 12:08:03.439869 4772 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-k7z7p_7f63617e-c125-40e3-a273-4180f7d8d45c/kube-rbac-proxy/0.log" Nov 28 12:08:03 crc kubenswrapper[4772]: I1128 12:08:03.561065 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs_47a38974-64f9-46ba-b4cf-f61c0d3a485e/kube-rbac-proxy/0.log" Nov 28 12:08:03 crc kubenswrapper[4772]: I1128 12:08:03.621833 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs_47a38974-64f9-46ba-b4cf-f61c0d3a485e/manager/0.log" Nov 28 12:08:03 crc kubenswrapper[4772]: I1128 12:08:03.923188 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-g4wlz_7cc8caa8-c50f-432d-9d19-8b3f1603d90a/registry-server/0.log" Nov 28 12:08:03 crc kubenswrapper[4772]: I1128 12:08:03.998344 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7f586794b9-rndfk_b4035afc-5f40-4643-8f3c-39a68fe3efa6/operator/0.log" Nov 28 12:08:04 crc kubenswrapper[4772]: I1128 12:08:04.155190 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-jpf2b_e955b059-294d-40ab-b4af-6bbf7c5bb2e6/kube-rbac-proxy/0.log" Nov 28 12:08:04 crc kubenswrapper[4772]: I1128 12:08:04.274795 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-jpf2b_e955b059-294d-40ab-b4af-6bbf7c5bb2e6/manager/0.log" Nov 28 12:08:04 crc kubenswrapper[4772]: I1128 12:08:04.392002 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-2mc7h_2ce66d6e-19b8-41e7-890d-f17f4be5a920/kube-rbac-proxy/0.log" Nov 28 12:08:04 crc kubenswrapper[4772]: I1128 12:08:04.423284 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-2mc7h_2ce66d6e-19b8-41e7-890d-f17f4be5a920/manager/0.log" Nov 28 12:08:04 crc kubenswrapper[4772]: I1128 12:08:04.533855 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ldrjz_6e95de97-8ad3-493a-a98f-5541e23ca701/operator/0.log" Nov 28 12:08:04 crc kubenswrapper[4772]: I1128 12:08:04.702169 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-pqx6v_79791884-38fa-4d4e-ace2-cd02b0df26ab/kube-rbac-proxy/0.log" Nov 28 12:08:04 crc kubenswrapper[4772]: I1128 12:08:04.732108 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-pqx6v_79791884-38fa-4d4e-ace2-cd02b0df26ab/manager/0.log" Nov 28 12:08:04 crc kubenswrapper[4772]: I1128 12:08:04.904676 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-cxg56_9632aabc-46f3-44f3-b6ff-01923cddd5fa/kube-rbac-proxy/0.log" Nov 28 12:08:04 crc kubenswrapper[4772]: I1128 12:08:04.937219 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6fbf799579-db6rg_ed05bf4d-d7d9-40eb-965a-5c866fc76b3c/manager/0.log" Nov 28 12:08:04 crc kubenswrapper[4772]: 
I1128 12:08:04.998527 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-cxg56_9632aabc-46f3-44f3-b6ff-01923cddd5fa/manager/0.log" Nov 28 12:08:05 crc kubenswrapper[4772]: I1128 12:08:05.077373 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-956cf_7b6bce9b-9e9a-414a-aad7-5a8667c9557d/kube-rbac-proxy/0.log" Nov 28 12:08:05 crc kubenswrapper[4772]: I1128 12:08:05.136333 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-956cf_7b6bce9b-9e9a-414a-aad7-5a8667c9557d/manager/0.log" Nov 28 12:08:05 crc kubenswrapper[4772]: I1128 12:08:05.175067 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-cfl4r_9d009ed5-21d1-4f1c-b1ec-bef39cf8a265/manager/0.log" Nov 28 12:08:05 crc kubenswrapper[4772]: I1128 12:08:05.176182 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-cfl4r_9d009ed5-21d1-4f1c-b1ec-bef39cf8a265/kube-rbac-proxy/0.log" Nov 28 12:08:23 crc kubenswrapper[4772]: I1128 12:08:23.696916 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-w8l8k_8b29fa00-3205-4f7c-8f5f-671c7921029b/control-plane-machine-set-operator/0.log" Nov 28 12:08:23 crc kubenswrapper[4772]: I1128 12:08:23.859908 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4ct44_d593cf3a-ced7-4f3a-a15a-10c3309a2ee3/machine-api-operator/0.log" Nov 28 12:08:23 crc kubenswrapper[4772]: I1128 12:08:23.873591 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4ct44_d593cf3a-ced7-4f3a-a15a-10c3309a2ee3/kube-rbac-proxy/0.log" Nov 28 12:08:36 crc kubenswrapper[4772]: I1128 12:08:36.766626 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-hxhpx_9aab53d9-b682-4d53-8d5e-8fd0498411e6/cert-manager-controller/0.log" Nov 28 12:08:36 crc kubenswrapper[4772]: I1128 12:08:36.923099 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-tmmls_ee885bfb-fd81-472f-8de7-2a64130e0141/cert-manager-cainjector/0.log" Nov 28 12:08:36 crc kubenswrapper[4772]: I1128 12:08:36.959166 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-cq45f_e3fe3585-4df1-4fdc-ab0f-7c9c4ed0e6de/cert-manager-webhook/0.log" Nov 28 12:08:49 crc kubenswrapper[4772]: I1128 12:08:49.489517 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-7glrw_dca0040a-69ac-4ff1-aefa-64b17329697f/nmstate-console-plugin/0.log" Nov 28 12:08:49 crc kubenswrapper[4772]: I1128 12:08:49.682614 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qqs48_1bec0d53-d3cb-497c-8db1-646796c7194c/nmstate-handler/0.log" Nov 28 12:08:49 crc kubenswrapper[4772]: I1128 12:08:49.769476 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-zj2qc_ee79d6cb-a211-42c9-b669-9e202376834a/kube-rbac-proxy/0.log" Nov 28 12:08:49 crc kubenswrapper[4772]: I1128 12:08:49.774626 4772 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-zj2qc_ee79d6cb-a211-42c9-b669-9e202376834a/nmstate-metrics/0.log" Nov 28 12:08:49 crc kubenswrapper[4772]: I1128 12:08:49.919043 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-2mwrb_e1c70388-5a93-4972-ab1f-24e87ab8498e/nmstate-operator/0.log" Nov 28 12:08:49 crc kubenswrapper[4772]: I1128 12:08:49.992479 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-h6rxp_c53fcd9d-4c6c-4829-8caa-3cddd7c60442/nmstate-webhook/0.log" Nov 28 12:09:06 crc kubenswrapper[4772]: I1128 12:09:06.151896 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-pndk9_e2ea9b50-b6aa-4700-b238-a66b47d5d070/kube-rbac-proxy/0.log" Nov 28 12:09:06 crc kubenswrapper[4772]: I1128 12:09:06.313440 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-pndk9_e2ea9b50-b6aa-4700-b238-a66b47d5d070/controller/0.log" Nov 28 12:09:06 crc kubenswrapper[4772]: I1128 12:09:06.446069 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-frr-files/0.log" Nov 28 12:09:06 crc kubenswrapper[4772]: I1128 12:09:06.581477 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-frr-files/0.log" Nov 28 12:09:06 crc kubenswrapper[4772]: I1128 12:09:06.614798 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-reloader/0.log" Nov 28 12:09:06 crc kubenswrapper[4772]: I1128 12:09:06.620185 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-metrics/0.log" Nov 28 12:09:06 crc kubenswrapper[4772]: I1128 12:09:06.679260 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-reloader/0.log" Nov 28 12:09:06 crc kubenswrapper[4772]: I1128 12:09:06.806706 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-reloader/0.log" Nov 28 12:09:06 crc kubenswrapper[4772]: I1128 12:09:06.827797 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-frr-files/0.log" Nov 28 12:09:06 crc kubenswrapper[4772]: I1128 12:09:06.840031 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-metrics/0.log" Nov 28 12:09:06 crc kubenswrapper[4772]: I1128 12:09:06.874249 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-metrics/0.log" Nov 28 12:09:07 crc kubenswrapper[4772]: I1128 12:09:07.009576 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-frr-files/0.log" Nov 28 12:09:07 crc kubenswrapper[4772]: I1128 12:09:07.010921 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-reloader/0.log" Nov 28 12:09:07 crc kubenswrapper[4772]: I1128 12:09:07.053084 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/controller/0.log" Nov 28 12:09:07 crc kubenswrapper[4772]: I1128 12:09:07.061693 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-metrics/0.log" Nov 28 12:09:07 crc kubenswrapper[4772]: I1128 12:09:07.263869 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/frr-metrics/0.log" Nov 28 12:09:07 crc kubenswrapper[4772]: I1128 12:09:07.311538 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/kube-rbac-proxy-frr/0.log" Nov 28 12:09:07 crc kubenswrapper[4772]: I1128 12:09:07.314951 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/kube-rbac-proxy/0.log" Nov 28 12:09:07 crc kubenswrapper[4772]: I1128 12:09:07.469975 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/reloader/0.log" Nov 28 12:09:07 crc kubenswrapper[4772]: I1128 12:09:07.520320 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-h5pwr_755f7720-2965-444a-887c-b4ab39b4160f/frr-k8s-webhook-server/0.log" Nov 28 12:09:07 crc kubenswrapper[4772]: I1128 12:09:07.734332 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-656496c9ff-4ql2l_b9e32536-fc25-4e7d-8361-41e61fd188f4/manager/0.log" Nov 28 12:09:07 crc kubenswrapper[4772]: I1128 12:09:07.922838 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b6fc5bcd6-krdrb_2c08f594-245d-4b59-9890-c8277ce4229f/webhook-server/0.log" Nov 28 12:09:08 crc kubenswrapper[4772]: I1128 12:09:08.034966 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-c9tgf_175603eb-4244-46dc-98a4-2f8426488c48/kube-rbac-proxy/0.log" Nov 28 12:09:08 crc kubenswrapper[4772]: I1128 12:09:08.500834 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-c9tgf_175603eb-4244-46dc-98a4-2f8426488c48/speaker/0.log" Nov 28 12:09:08 crc kubenswrapper[4772]: I1128 12:09:08.513729 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/frr/0.log" Nov 28 12:09:22 crc kubenswrapper[4772]: I1128 12:09:22.602493 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn_069b2332-5974-4cab-b15b-dfa1985ebce1/util/0.log" Nov 28 12:09:22 crc kubenswrapper[4772]: I1128 12:09:22.808606 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn_069b2332-5974-4cab-b15b-dfa1985ebce1/util/0.log" Nov 28 12:09:22 crc kubenswrapper[4772]: I1128 12:09:22.847148 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn_069b2332-5974-4cab-b15b-dfa1985ebce1/pull/0.log" Nov 28 12:09:22 crc kubenswrapper[4772]: I1128 12:09:22.852719 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn_069b2332-5974-4cab-b15b-dfa1985ebce1/pull/0.log" Nov 28 12:09:23 crc kubenswrapper[4772]: I1128 12:09:23.177220 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn_069b2332-5974-4cab-b15b-dfa1985ebce1/util/0.log" Nov 28 12:09:23 crc kubenswrapper[4772]: I1128 12:09:23.226731 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn_069b2332-5974-4cab-b15b-dfa1985ebce1/extract/0.log" Nov 28 12:09:23 crc kubenswrapper[4772]: I1128 12:09:23.240839 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn_069b2332-5974-4cab-b15b-dfa1985ebce1/pull/0.log" Nov 28 12:09:23 crc kubenswrapper[4772]: I1128 12:09:23.364691 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd_53a7011a-aa17-41ce-9010-9cc9bb873b56/util/0.log" Nov 28 12:09:23 crc kubenswrapper[4772]: I1128 12:09:23.511061 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd_53a7011a-aa17-41ce-9010-9cc9bb873b56/util/0.log" Nov 28 12:09:23 crc kubenswrapper[4772]: I1128 12:09:23.527637 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd_53a7011a-aa17-41ce-9010-9cc9bb873b56/pull/0.log" Nov 28 12:09:23 crc kubenswrapper[4772]: I1128 12:09:23.528223 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd_53a7011a-aa17-41ce-9010-9cc9bb873b56/pull/0.log" Nov 28 12:09:23 crc kubenswrapper[4772]: I1128 12:09:23.717232 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd_53a7011a-aa17-41ce-9010-9cc9bb873b56/pull/0.log" Nov 28 12:09:23 crc kubenswrapper[4772]: I1128 12:09:23.779801 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd_53a7011a-aa17-41ce-9010-9cc9bb873b56/util/0.log" Nov 28 12:09:23 crc kubenswrapper[4772]: I1128 12:09:23.794984 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd_53a7011a-aa17-41ce-9010-9cc9bb873b56/extract/0.log" Nov 28 12:09:23 crc kubenswrapper[4772]: I1128 12:09:23.906026 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9h6kq_08428ad5-f854-4a72-a10a-bc72715b05a0/extract-utilities/0.log" Nov 28 12:09:24 crc kubenswrapper[4772]: I1128 12:09:24.069159 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9h6kq_08428ad5-f854-4a72-a10a-bc72715b05a0/extract-utilities/0.log" Nov 28 12:09:24 crc kubenswrapper[4772]: I1128 12:09:24.072321 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9h6kq_08428ad5-f854-4a72-a10a-bc72715b05a0/extract-content/0.log" Nov 28 12:09:24 crc kubenswrapper[4772]: I1128 12:09:24.160718 
4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9h6kq_08428ad5-f854-4a72-a10a-bc72715b05a0/extract-content/0.log" Nov 28 12:09:24 crc kubenswrapper[4772]: I1128 12:09:24.262788 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9h6kq_08428ad5-f854-4a72-a10a-bc72715b05a0/extract-content/0.log" Nov 28 12:09:24 crc kubenswrapper[4772]: I1128 12:09:24.272576 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9h6kq_08428ad5-f854-4a72-a10a-bc72715b05a0/extract-utilities/0.log" Nov 28 12:09:24 crc kubenswrapper[4772]: I1128 12:09:24.432760 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-42knr_42c5efb9-80ec-409e-9e49-326461bfa739/extract-utilities/0.log" Nov 28 12:09:24 crc kubenswrapper[4772]: I1128 12:09:24.690031 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-42knr_42c5efb9-80ec-409e-9e49-326461bfa739/extract-utilities/0.log" Nov 28 12:09:24 crc kubenswrapper[4772]: I1128 12:09:24.714227 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-42knr_42c5efb9-80ec-409e-9e49-326461bfa739/extract-content/0.log" Nov 28 12:09:24 crc kubenswrapper[4772]: I1128 12:09:24.737031 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9h6kq_08428ad5-f854-4a72-a10a-bc72715b05a0/registry-server/0.log" Nov 28 12:09:24 crc kubenswrapper[4772]: I1128 12:09:24.780234 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-42knr_42c5efb9-80ec-409e-9e49-326461bfa739/extract-content/0.log" Nov 28 12:09:24 crc kubenswrapper[4772]: I1128 12:09:24.917686 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-42knr_42c5efb9-80ec-409e-9e49-326461bfa739/extract-utilities/0.log" Nov 28 12:09:24 crc kubenswrapper[4772]: I1128 12:09:24.958919 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-42knr_42c5efb9-80ec-409e-9e49-326461bfa739/extract-content/0.log" Nov 28 12:09:25 crc kubenswrapper[4772]: I1128 12:09:25.158534 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4slqp_c6c49597-5e3b-44ab-9b76-cb54e6c65736/marketplace-operator/0.log" Nov 28 12:09:25 crc kubenswrapper[4772]: I1128 12:09:25.290035 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zw5p4_37640314-124c-448f-bb98-bc81f7b7ab0f/extract-utilities/0.log" Nov 28 12:09:25 crc kubenswrapper[4772]: I1128 12:09:25.387629 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-42knr_42c5efb9-80ec-409e-9e49-326461bfa739/registry-server/0.log" Nov 28 12:09:25 crc kubenswrapper[4772]: I1128 12:09:25.417209 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zw5p4_37640314-124c-448f-bb98-bc81f7b7ab0f/extract-utilities/0.log" Nov 28 12:09:25 crc kubenswrapper[4772]: I1128 12:09:25.486000 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zw5p4_37640314-124c-448f-bb98-bc81f7b7ab0f/extract-content/0.log" Nov 28 12:09:25 crc kubenswrapper[4772]: I1128 12:09:25.499820 4772 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zw5p4_37640314-124c-448f-bb98-bc81f7b7ab0f/extract-content/0.log" Nov 28 12:09:25 crc kubenswrapper[4772]: I1128 12:09:25.632626 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zw5p4_37640314-124c-448f-bb98-bc81f7b7ab0f/extract-utilities/0.log" Nov 28 12:09:25 crc kubenswrapper[4772]: I1128 12:09:25.672848 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zw5p4_37640314-124c-448f-bb98-bc81f7b7ab0f/extract-content/0.log" Nov 28 12:09:25 crc kubenswrapper[4772]: I1128 12:09:25.706673 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zw5p4_37640314-124c-448f-bb98-bc81f7b7ab0f/registry-server/0.log" Nov 28 12:09:25 crc kubenswrapper[4772]: I1128 12:09:25.838412 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ch7sp_89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932/extract-utilities/0.log" Nov 28 12:09:26 crc kubenswrapper[4772]: I1128 12:09:26.067440 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ch7sp_89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932/extract-content/0.log" Nov 28 12:09:26 crc kubenswrapper[4772]: I1128 12:09:26.100008 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ch7sp_89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932/extract-utilities/0.log" Nov 28 12:09:26 crc kubenswrapper[4772]: I1128 12:09:26.101183 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ch7sp_89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932/extract-content/0.log" Nov 28 12:09:26 crc kubenswrapper[4772]: I1128 12:09:26.286192 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ch7sp_89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932/extract-utilities/0.log" Nov 28 12:09:26 crc kubenswrapper[4772]: I1128 12:09:26.306975 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ch7sp_89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932/extract-content/0.log" Nov 28 12:09:26 crc kubenswrapper[4772]: I1128 12:09:26.743408 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ch7sp_89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932/registry-server/0.log" Nov 28 12:09:53 crc kubenswrapper[4772]: I1128 12:09:53.895977 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:09:53 crc kubenswrapper[4772]: I1128 12:09:53.896483 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:10:23 crc kubenswrapper[4772]: I1128 12:10:23.896975 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:10:23 crc kubenswrapper[4772]: I1128 12:10:23.897464 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:10:53 crc kubenswrapper[4772]: I1128 12:10:53.896178 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:10:53 crc kubenswrapper[4772]: I1128 12:10:53.896844 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:10:53 crc kubenswrapper[4772]: I1128 12:10:53.896920 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" Nov 28 12:10:53 crc kubenswrapper[4772]: I1128 12:10:53.898900 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:10:53 crc kubenswrapper[4772]: I1128 12:10:53.899037 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" gracePeriod=600 Nov 28 12:10:54 crc kubenswrapper[4772]: E1128 12:10:54.045223 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:10:55 crc kubenswrapper[4772]: I1128 12:10:55.024902 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" exitCode=0 Nov 28 12:10:55 crc kubenswrapper[4772]: I1128 12:10:55.024980 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1"} Nov 28 12:10:55 crc kubenswrapper[4772]: I1128 12:10:55.025919 4772 scope.go:117] "RemoveContainer" containerID="d4675f58701a5faa01b0838065d19af68e76bbebaa0a8983faa08ad0061f280e" Nov 28 12:10:55 crc kubenswrapper[4772]: I1128 
12:10:55.026652 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:10:55 crc kubenswrapper[4772]: E1128 12:10:55.027030 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:11:05 crc kubenswrapper[4772]: I1128 12:11:05.153440 4772 generic.go:334] "Generic (PLEG): container finished" podID="611ce4cc-b675-496a-b265-189252ce3818" containerID="8398108d834b08fa6628a7d0b60df8367e49dc8186bad2e16e58b9106590a248" exitCode=0 Nov 28 12:11:05 crc kubenswrapper[4772]: I1128 12:11:05.153983 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hj8n5/must-gather-w24r7" event={"ID":"611ce4cc-b675-496a-b265-189252ce3818","Type":"ContainerDied","Data":"8398108d834b08fa6628a7d0b60df8367e49dc8186bad2e16e58b9106590a248"} Nov 28 12:11:05 crc kubenswrapper[4772]: I1128 12:11:05.154602 4772 scope.go:117] "RemoveContainer" containerID="8398108d834b08fa6628a7d0b60df8367e49dc8186bad2e16e58b9106590a248" Nov 28 12:11:05 crc kubenswrapper[4772]: I1128 12:11:05.916212 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hj8n5_must-gather-w24r7_611ce4cc-b675-496a-b265-189252ce3818/gather/0.log" Nov 28 12:11:09 crc kubenswrapper[4772]: I1128 12:11:09.994906 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:11:09 crc kubenswrapper[4772]: E1128 12:11:09.995967 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:11:13 crc kubenswrapper[4772]: I1128 12:11:13.658676 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hj8n5/must-gather-w24r7"] Nov 28 12:11:13 crc kubenswrapper[4772]: I1128 12:11:13.659431 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hj8n5/must-gather-w24r7" podUID="611ce4cc-b675-496a-b265-189252ce3818" containerName="copy" containerID="cri-o://337870245d3256bae378d4018ea10946dbaf5c3436c7523c887505612761dc45" gracePeriod=2 Nov 28 12:11:13 crc kubenswrapper[4772]: I1128 12:11:13.669241 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hj8n5/must-gather-w24r7"] Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.136537 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hj8n5_must-gather-w24r7_611ce4cc-b675-496a-b265-189252ce3818/copy/0.log" Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.137205 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hj8n5/must-gather-w24r7" Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.215192 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx924\" (UniqueName: \"kubernetes.io/projected/611ce4cc-b675-496a-b265-189252ce3818-kube-api-access-xx924\") pod \"611ce4cc-b675-496a-b265-189252ce3818\" (UID: \"611ce4cc-b675-496a-b265-189252ce3818\") " Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.215270 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/611ce4cc-b675-496a-b265-189252ce3818-must-gather-output\") pod \"611ce4cc-b675-496a-b265-189252ce3818\" (UID: \"611ce4cc-b675-496a-b265-189252ce3818\") " Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.219930 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/611ce4cc-b675-496a-b265-189252ce3818-kube-api-access-xx924" (OuterVolumeSpecName: "kube-api-access-xx924") pod "611ce4cc-b675-496a-b265-189252ce3818" (UID: "611ce4cc-b675-496a-b265-189252ce3818"). InnerVolumeSpecName "kube-api-access-xx924". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.260752 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hj8n5_must-gather-w24r7_611ce4cc-b675-496a-b265-189252ce3818/copy/0.log" Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.261230 4772 generic.go:334] "Generic (PLEG): container finished" podID="611ce4cc-b675-496a-b265-189252ce3818" containerID="337870245d3256bae378d4018ea10946dbaf5c3436c7523c887505612761dc45" exitCode=143 Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.261283 4772 scope.go:117] "RemoveContainer" containerID="337870245d3256bae378d4018ea10946dbaf5c3436c7523c887505612761dc45" Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.261377 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hj8n5/must-gather-w24r7" Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.318661 4772 scope.go:117] "RemoveContainer" containerID="8398108d834b08fa6628a7d0b60df8367e49dc8186bad2e16e58b9106590a248" Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.323391 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx924\" (UniqueName: \"kubernetes.io/projected/611ce4cc-b675-496a-b265-189252ce3818-kube-api-access-xx924\") on node \"crc\" DevicePath \"\"" Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.417531 4772 scope.go:117] "RemoveContainer" containerID="337870245d3256bae378d4018ea10946dbaf5c3436c7523c887505612761dc45" Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.419769 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/611ce4cc-b675-496a-b265-189252ce3818-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "611ce4cc-b675-496a-b265-189252ce3818" (UID: "611ce4cc-b675-496a-b265-189252ce3818"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:11:14 crc kubenswrapper[4772]: E1128 12:11:14.420256 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"337870245d3256bae378d4018ea10946dbaf5c3436c7523c887505612761dc45\": container with ID starting with 337870245d3256bae378d4018ea10946dbaf5c3436c7523c887505612761dc45 not found: ID does not exist" containerID="337870245d3256bae378d4018ea10946dbaf5c3436c7523c887505612761dc45" Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.420305 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"337870245d3256bae378d4018ea10946dbaf5c3436c7523c887505612761dc45"} err="failed to get container status \"337870245d3256bae378d4018ea10946dbaf5c3436c7523c887505612761dc45\": rpc error: code = NotFound desc = could not find container \"337870245d3256bae378d4018ea10946dbaf5c3436c7523c887505612761dc45\": container with ID starting with 337870245d3256bae378d4018ea10946dbaf5c3436c7523c887505612761dc45 not found: ID does not exist" Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.420332 4772 scope.go:117] "RemoveContainer" containerID="8398108d834b08fa6628a7d0b60df8367e49dc8186bad2e16e58b9106590a248" Nov 28 12:11:14 crc kubenswrapper[4772]: E1128 12:11:14.421170 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8398108d834b08fa6628a7d0b60df8367e49dc8186bad2e16e58b9106590a248\": container with ID starting with 8398108d834b08fa6628a7d0b60df8367e49dc8186bad2e16e58b9106590a248 not found: ID does not exist" containerID="8398108d834b08fa6628a7d0b60df8367e49dc8186bad2e16e58b9106590a248" Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.421204 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8398108d834b08fa6628a7d0b60df8367e49dc8186bad2e16e58b9106590a248"} err="failed to get container status \"8398108d834b08fa6628a7d0b60df8367e49dc8186bad2e16e58b9106590a248\": rpc error: code = NotFound desc = could not find container \"8398108d834b08fa6628a7d0b60df8367e49dc8186bad2e16e58b9106590a248\": container with ID starting with 8398108d834b08fa6628a7d0b60df8367e49dc8186bad2e16e58b9106590a248 not found: ID does not exist" Nov 28 12:11:14 crc kubenswrapper[4772]: I1128 12:11:14.425765 4772 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/611ce4cc-b675-496a-b265-189252ce3818-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 28 12:11:16 crc kubenswrapper[4772]: I1128 12:11:16.004901 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="611ce4cc-b675-496a-b265-189252ce3818" path="/var/lib/kubelet/pods/611ce4cc-b675-496a-b265-189252ce3818/volumes" Nov 28 12:11:23 crc kubenswrapper[4772]: I1128 12:11:23.995205 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:11:23 crc kubenswrapper[4772]: E1128 12:11:23.996586 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 
28 12:11:36 crc kubenswrapper[4772]: I1128 12:11:36.994590 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:11:36 crc kubenswrapper[4772]: E1128 12:11:36.995563 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:11:47 crc kubenswrapper[4772]: I1128 12:11:47.995331 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:11:47 crc kubenswrapper[4772]: E1128 12:11:47.999582 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:12:00 crc kubenswrapper[4772]: I1128 12:12:00.996739 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:12:00 crc kubenswrapper[4772]: E1128 12:12:00.997886 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:12:13 crc kubenswrapper[4772]: I1128 12:12:13.994287 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:12:13 crc kubenswrapper[4772]: E1128 12:12:13.995033 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:12:26 crc kubenswrapper[4772]: I1128 12:12:26.995487 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:12:26 crc kubenswrapper[4772]: E1128 12:12:26.996583 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:12:38 crc kubenswrapper[4772]: I1128 12:12:38.994039 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:12:38 crc 
kubenswrapper[4772]: E1128 12:12:38.995159 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:12:48 crc kubenswrapper[4772]: I1128 12:12:48.094070 4772 scope.go:117] "RemoveContainer" containerID="60525dc274f70561bc50763d6e87bb6910db54aa1f6e2679ab5c5b7d8fd03529" Nov 28 12:12:52 crc kubenswrapper[4772]: I1128 12:12:52.007408 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:12:52 crc kubenswrapper[4772]: E1128 12:12:52.008546 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:13:02 crc kubenswrapper[4772]: I1128 12:13:02.995976 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:13:02 crc kubenswrapper[4772]: E1128 12:13:02.997440 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:13:15 crc kubenswrapper[4772]: I1128 12:13:15.995023 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:13:15 crc kubenswrapper[4772]: E1128 12:13:15.995953 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:13:26 crc kubenswrapper[4772]: I1128 12:13:26.995404 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:13:26 crc kubenswrapper[4772]: E1128 12:13:26.996386 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:13:37 crc kubenswrapper[4772]: I1128 12:13:37.995055 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:13:37 crc 
kubenswrapper[4772]: E1128 12:13:37.996261 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.683888 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-chccj/must-gather-qh9h6"] Nov 28 12:13:41 crc kubenswrapper[4772]: E1128 12:13:41.685439 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="611ce4cc-b675-496a-b265-189252ce3818" containerName="copy" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.685516 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="611ce4cc-b675-496a-b265-189252ce3818" containerName="copy" Nov 28 12:13:41 crc kubenswrapper[4772]: E1128 12:13:41.685583 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="611ce4cc-b675-496a-b265-189252ce3818" containerName="gather" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.685637 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="611ce4cc-b675-496a-b265-189252ce3818" containerName="gather" Nov 28 12:13:41 crc kubenswrapper[4772]: E1128 12:13:41.685737 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12538e7-2cfc-43c6-aa78-eaa636db7f22" containerName="container-00" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.685800 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12538e7-2cfc-43c6-aa78-eaa636db7f22" containerName="container-00" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.686021 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="611ce4cc-b675-496a-b265-189252ce3818" containerName="gather" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.686098 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a12538e7-2cfc-43c6-aa78-eaa636db7f22" containerName="container-00" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.686163 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="611ce4cc-b675-496a-b265-189252ce3818" containerName="copy" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.687107 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-chccj/must-gather-qh9h6" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.697163 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-chccj"/"kube-root-ca.crt" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.697172 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-chccj"/"openshift-service-ca.crt" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.715211 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-chccj/must-gather-qh9h6"] Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.794095 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnf6n\" (UniqueName: \"kubernetes.io/projected/9465913e-87c9-4d2e-bb14-b6571a93f5ef-kube-api-access-pnf6n\") pod \"must-gather-qh9h6\" (UID: \"9465913e-87c9-4d2e-bb14-b6571a93f5ef\") " pod="openshift-must-gather-chccj/must-gather-qh9h6" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.795055 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9465913e-87c9-4d2e-bb14-b6571a93f5ef-must-gather-output\") pod \"must-gather-qh9h6\" (UID: \"9465913e-87c9-4d2e-bb14-b6571a93f5ef\") " pod="openshift-must-gather-chccj/must-gather-qh9h6" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.896626 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnf6n\" (UniqueName: \"kubernetes.io/projected/9465913e-87c9-4d2e-bb14-b6571a93f5ef-kube-api-access-pnf6n\") pod \"must-gather-qh9h6\" (UID: \"9465913e-87c9-4d2e-bb14-b6571a93f5ef\") " pod="openshift-must-gather-chccj/must-gather-qh9h6" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.897201 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9465913e-87c9-4d2e-bb14-b6571a93f5ef-must-gather-output\") pod \"must-gather-qh9h6\" (UID: \"9465913e-87c9-4d2e-bb14-b6571a93f5ef\") " pod="openshift-must-gather-chccj/must-gather-qh9h6" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.897647 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9465913e-87c9-4d2e-bb14-b6571a93f5ef-must-gather-output\") pod \"must-gather-qh9h6\" (UID: \"9465913e-87c9-4d2e-bb14-b6571a93f5ef\") " pod="openshift-must-gather-chccj/must-gather-qh9h6" Nov 28 12:13:41 crc kubenswrapper[4772]: I1128 12:13:41.919770 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnf6n\" (UniqueName: \"kubernetes.io/projected/9465913e-87c9-4d2e-bb14-b6571a93f5ef-kube-api-access-pnf6n\") pod \"must-gather-qh9h6\" (UID: \"9465913e-87c9-4d2e-bb14-b6571a93f5ef\") " pod="openshift-must-gather-chccj/must-gather-qh9h6" Nov 28 12:13:42 crc kubenswrapper[4772]: I1128 12:13:42.004791 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-chccj/must-gather-qh9h6" Nov 28 12:13:42 crc kubenswrapper[4772]: I1128 12:13:42.449279 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-chccj/must-gather-qh9h6"] Nov 28 12:13:42 crc kubenswrapper[4772]: I1128 12:13:42.924606 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chccj/must-gather-qh9h6" event={"ID":"9465913e-87c9-4d2e-bb14-b6571a93f5ef","Type":"ContainerStarted","Data":"83c8575c8ad55c09f7ab66f115a6f2ee2fe05bbe67c848365e7f78690f62c989"} Nov 28 12:13:42 crc kubenswrapper[4772]: I1128 12:13:42.924931 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chccj/must-gather-qh9h6" event={"ID":"9465913e-87c9-4d2e-bb14-b6571a93f5ef","Type":"ContainerStarted","Data":"fa495eef3d7abaee5a9e2c73963120a45a398ea742d0c82ea63d47817adab640"} Nov 28 12:13:43 crc kubenswrapper[4772]: I1128 12:13:43.936327 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chccj/must-gather-qh9h6" event={"ID":"9465913e-87c9-4d2e-bb14-b6571a93f5ef","Type":"ContainerStarted","Data":"bf4c4817959a672e621e7724d87af0257da62d3b557b2bb4d6e639be0de6d291"} Nov 28 12:13:43 crc kubenswrapper[4772]: I1128 12:13:43.959001 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-chccj/must-gather-qh9h6" podStartSLOduration=2.958977827 podStartE2EDuration="2.958977827s" podCreationTimestamp="2025-11-28 12:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:13:43.958422342 +0000 UTC m=+4022.281665629" watchObservedRunningTime="2025-11-28 12:13:43.958977827 +0000 UTC m=+4022.282221064" Nov 28 12:13:46 crc kubenswrapper[4772]: I1128 12:13:46.582149 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-chccj/crc-debug-2xx4g"] Nov 28 12:13:46 crc kubenswrapper[4772]: I1128 12:13:46.583704 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-chccj/crc-debug-2xx4g" Nov 28 12:13:46 crc kubenswrapper[4772]: I1128 12:13:46.586242 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-chccj"/"default-dockercfg-d2thh" Nov 28 12:13:46 crc kubenswrapper[4772]: I1128 12:13:46.727732 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c3e43cb-38f1-43ab-9511-9b0b449ca8ad-host\") pod \"crc-debug-2xx4g\" (UID: \"1c3e43cb-38f1-43ab-9511-9b0b449ca8ad\") " pod="openshift-must-gather-chccj/crc-debug-2xx4g" Nov 28 12:13:46 crc kubenswrapper[4772]: I1128 12:13:46.727801 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmsbs\" (UniqueName: \"kubernetes.io/projected/1c3e43cb-38f1-43ab-9511-9b0b449ca8ad-kube-api-access-gmsbs\") pod \"crc-debug-2xx4g\" (UID: \"1c3e43cb-38f1-43ab-9511-9b0b449ca8ad\") " pod="openshift-must-gather-chccj/crc-debug-2xx4g" Nov 28 12:13:46 crc kubenswrapper[4772]: I1128 12:13:46.830211 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c3e43cb-38f1-43ab-9511-9b0b449ca8ad-host\") pod \"crc-debug-2xx4g\" (UID: \"1c3e43cb-38f1-43ab-9511-9b0b449ca8ad\") " pod="openshift-must-gather-chccj/crc-debug-2xx4g" Nov 28 12:13:46 crc kubenswrapper[4772]: I1128 12:13:46.830297 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmsbs\" (UniqueName: \"kubernetes.io/projected/1c3e43cb-38f1-43ab-9511-9b0b449ca8ad-kube-api-access-gmsbs\") pod \"crc-debug-2xx4g\" (UID: \"1c3e43cb-38f1-43ab-9511-9b0b449ca8ad\") " pod="openshift-must-gather-chccj/crc-debug-2xx4g" Nov 28 12:13:46 crc kubenswrapper[4772]: I1128 12:13:46.830386 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c3e43cb-38f1-43ab-9511-9b0b449ca8ad-host\") pod \"crc-debug-2xx4g\" (UID: \"1c3e43cb-38f1-43ab-9511-9b0b449ca8ad\") " pod="openshift-must-gather-chccj/crc-debug-2xx4g" Nov 28 12:13:46 crc kubenswrapper[4772]: I1128 12:13:46.862105 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmsbs\" (UniqueName: \"kubernetes.io/projected/1c3e43cb-38f1-43ab-9511-9b0b449ca8ad-kube-api-access-gmsbs\") pod \"crc-debug-2xx4g\" (UID: \"1c3e43cb-38f1-43ab-9511-9b0b449ca8ad\") " pod="openshift-must-gather-chccj/crc-debug-2xx4g" Nov 28 12:13:46 crc kubenswrapper[4772]: I1128 12:13:46.904757 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-chccj/crc-debug-2xx4g" Nov 28 12:13:46 crc kubenswrapper[4772]: W1128 12:13:46.936944 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c3e43cb_38f1_43ab_9511_9b0b449ca8ad.slice/crio-a136aadb3b6a859fb9966ba20b2585ee84184e6a38b6d0f860ae9fef8172e906 WatchSource:0}: Error finding container a136aadb3b6a859fb9966ba20b2585ee84184e6a38b6d0f860ae9fef8172e906: Status 404 returned error can't find the container with id a136aadb3b6a859fb9966ba20b2585ee84184e6a38b6d0f860ae9fef8172e906 Nov 28 12:13:46 crc kubenswrapper[4772]: I1128 12:13:46.965725 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chccj/crc-debug-2xx4g" event={"ID":"1c3e43cb-38f1-43ab-9511-9b0b449ca8ad","Type":"ContainerStarted","Data":"a136aadb3b6a859fb9966ba20b2585ee84184e6a38b6d0f860ae9fef8172e906"} Nov 28 12:13:47 crc kubenswrapper[4772]: I1128 12:13:47.979183 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chccj/crc-debug-2xx4g" event={"ID":"1c3e43cb-38f1-43ab-9511-9b0b449ca8ad","Type":"ContainerStarted","Data":"aec3f00e0932a36a12b11011c73a78ca3852077892214a14105d8db593b52843"} Nov 28 12:13:47 crc kubenswrapper[4772]: I1128 12:13:47.998519 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-chccj/crc-debug-2xx4g" podStartSLOduration=1.998499887 podStartE2EDuration="1.998499887s" podCreationTimestamp="2025-11-28 12:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:13:47.995703275 +0000 UTC m=+4026.318946502" watchObservedRunningTime="2025-11-28 12:13:47.998499887 +0000 UTC m=+4026.321743114" Nov 28 12:13:49 crc kubenswrapper[4772]: I1128 12:13:49.995466 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:13:49 crc kubenswrapper[4772]: E1128 12:13:49.996539 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:14:04 crc kubenswrapper[4772]: I1128 12:14:04.994553 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:14:04 crc kubenswrapper[4772]: E1128 12:14:04.995554 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:14:16 crc kubenswrapper[4772]: I1128 12:14:16.995273 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:14:16 crc kubenswrapper[4772]: E1128 12:14:16.996102 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:14:19 crc kubenswrapper[4772]: I1128 12:14:19.268184 4772 generic.go:334] "Generic (PLEG): container finished" podID="1c3e43cb-38f1-43ab-9511-9b0b449ca8ad" containerID="aec3f00e0932a36a12b11011c73a78ca3852077892214a14105d8db593b52843" exitCode=0 Nov 28 12:14:19 crc kubenswrapper[4772]: I1128 12:14:19.268299 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chccj/crc-debug-2xx4g" event={"ID":"1c3e43cb-38f1-43ab-9511-9b0b449ca8ad","Type":"ContainerDied","Data":"aec3f00e0932a36a12b11011c73a78ca3852077892214a14105d8db593b52843"} Nov 28 12:14:20 crc kubenswrapper[4772]: I1128 12:14:20.404969 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-chccj/crc-debug-2xx4g" Nov 28 12:14:20 crc kubenswrapper[4772]: I1128 12:14:20.447232 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-chccj/crc-debug-2xx4g"] Nov 28 12:14:20 crc kubenswrapper[4772]: I1128 12:14:20.456071 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-chccj/crc-debug-2xx4g"] Nov 28 12:14:20 crc kubenswrapper[4772]: I1128 12:14:20.607189 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmsbs\" (UniqueName: \"kubernetes.io/projected/1c3e43cb-38f1-43ab-9511-9b0b449ca8ad-kube-api-access-gmsbs\") pod \"1c3e43cb-38f1-43ab-9511-9b0b449ca8ad\" (UID: \"1c3e43cb-38f1-43ab-9511-9b0b449ca8ad\") " Nov 28 12:14:20 crc kubenswrapper[4772]: I1128 12:14:20.607352 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c3e43cb-38f1-43ab-9511-9b0b449ca8ad-host\") pod \"1c3e43cb-38f1-43ab-9511-9b0b449ca8ad\" (UID: \"1c3e43cb-38f1-43ab-9511-9b0b449ca8ad\") " Nov 28 12:14:20 crc kubenswrapper[4772]: I1128 12:14:20.607763 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c3e43cb-38f1-43ab-9511-9b0b449ca8ad-host" (OuterVolumeSpecName: "host") pod "1c3e43cb-38f1-43ab-9511-9b0b449ca8ad" (UID: "1c3e43cb-38f1-43ab-9511-9b0b449ca8ad"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:20 crc kubenswrapper[4772]: I1128 12:14:20.608154 4772 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c3e43cb-38f1-43ab-9511-9b0b449ca8ad-host\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:20 crc kubenswrapper[4772]: I1128 12:14:20.618694 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3e43cb-38f1-43ab-9511-9b0b449ca8ad-kube-api-access-gmsbs" (OuterVolumeSpecName: "kube-api-access-gmsbs") pod "1c3e43cb-38f1-43ab-9511-9b0b449ca8ad" (UID: "1c3e43cb-38f1-43ab-9511-9b0b449ca8ad"). InnerVolumeSpecName "kube-api-access-gmsbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:14:20 crc kubenswrapper[4772]: I1128 12:14:20.709493 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmsbs\" (UniqueName: \"kubernetes.io/projected/1c3e43cb-38f1-43ab-9511-9b0b449ca8ad-kube-api-access-gmsbs\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.239069 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xg9ps"] Nov 28 12:14:21 crc kubenswrapper[4772]: E1128 12:14:21.240407 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3e43cb-38f1-43ab-9511-9b0b449ca8ad" containerName="container-00" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.240636 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3e43cb-38f1-43ab-9511-9b0b449ca8ad" containerName="container-00" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.241120 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3e43cb-38f1-43ab-9511-9b0b449ca8ad" containerName="container-00" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.243794 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.250336 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xg9ps"] Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.291944 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a136aadb3b6a859fb9966ba20b2585ee84184e6a38b6d0f860ae9fef8172e906" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.292046 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-chccj/crc-debug-2xx4g" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.421260 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63746216-4644-4536-b990-f9805cc078da-catalog-content\") pod \"certified-operators-xg9ps\" (UID: \"63746216-4644-4536-b990-f9805cc078da\") " pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.421411 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63746216-4644-4536-b990-f9805cc078da-utilities\") pod \"certified-operators-xg9ps\" (UID: \"63746216-4644-4536-b990-f9805cc078da\") " pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.421444 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbt4m\" (UniqueName: \"kubernetes.io/projected/63746216-4644-4536-b990-f9805cc078da-kube-api-access-kbt4m\") pod \"certified-operators-xg9ps\" (UID: \"63746216-4644-4536-b990-f9805cc078da\") " pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.523579 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63746216-4644-4536-b990-f9805cc078da-utilities\") pod \"certified-operators-xg9ps\" (UID: \"63746216-4644-4536-b990-f9805cc078da\") " pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.523625 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbt4m\" (UniqueName: \"kubernetes.io/projected/63746216-4644-4536-b990-f9805cc078da-kube-api-access-kbt4m\") pod \"certified-operators-xg9ps\" (UID: \"63746216-4644-4536-b990-f9805cc078da\") " pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.523775 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63746216-4644-4536-b990-f9805cc078da-catalog-content\") pod \"certified-operators-xg9ps\" (UID: \"63746216-4644-4536-b990-f9805cc078da\") " pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.524571 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63746216-4644-4536-b990-f9805cc078da-utilities\") pod \"certified-operators-xg9ps\" (UID: \"63746216-4644-4536-b990-f9805cc078da\") " pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.524870 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63746216-4644-4536-b990-f9805cc078da-catalog-content\") pod \"certified-operators-xg9ps\" (UID: \"63746216-4644-4536-b990-f9805cc078da\") " pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.547011 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbt4m\" (UniqueName: \"kubernetes.io/projected/63746216-4644-4536-b990-f9805cc078da-kube-api-access-kbt4m\") pod 
\"certified-operators-xg9ps\" (UID: \"63746216-4644-4536-b990-f9805cc078da\") " pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.574813 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.837554 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-chccj/crc-debug-gvd4l"] Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.845021 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-chccj/crc-debug-gvd4l" Nov 28 12:14:21 crc kubenswrapper[4772]: I1128 12:14:21.847517 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-chccj"/"default-dockercfg-d2thh" Nov 28 12:14:22 crc kubenswrapper[4772]: I1128 12:14:22.012536 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c3e43cb-38f1-43ab-9511-9b0b449ca8ad" path="/var/lib/kubelet/pods/1c3e43cb-38f1-43ab-9511-9b0b449ca8ad/volumes" Nov 28 12:14:22 crc kubenswrapper[4772]: I1128 12:14:22.035754 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l76j\" (UniqueName: \"kubernetes.io/projected/79ead1e6-d991-43a5-b41f-3ee2f93f1e7e-kube-api-access-9l76j\") pod \"crc-debug-gvd4l\" (UID: \"79ead1e6-d991-43a5-b41f-3ee2f93f1e7e\") " pod="openshift-must-gather-chccj/crc-debug-gvd4l" Nov 28 12:14:22 crc kubenswrapper[4772]: I1128 12:14:22.036108 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79ead1e6-d991-43a5-b41f-3ee2f93f1e7e-host\") pod \"crc-debug-gvd4l\" (UID: \"79ead1e6-d991-43a5-b41f-3ee2f93f1e7e\") " pod="openshift-must-gather-chccj/crc-debug-gvd4l" Nov 28 12:14:22 crc kubenswrapper[4772]: I1128 12:14:22.139628 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79ead1e6-d991-43a5-b41f-3ee2f93f1e7e-host\") pod \"crc-debug-gvd4l\" (UID: \"79ead1e6-d991-43a5-b41f-3ee2f93f1e7e\") " pod="openshift-must-gather-chccj/crc-debug-gvd4l" Nov 28 12:14:22 crc kubenswrapper[4772]: I1128 12:14:22.139916 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79ead1e6-d991-43a5-b41f-3ee2f93f1e7e-host\") pod \"crc-debug-gvd4l\" (UID: \"79ead1e6-d991-43a5-b41f-3ee2f93f1e7e\") " pod="openshift-must-gather-chccj/crc-debug-gvd4l" Nov 28 12:14:22 crc kubenswrapper[4772]: I1128 12:14:22.140886 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l76j\" (UniqueName: \"kubernetes.io/projected/79ead1e6-d991-43a5-b41f-3ee2f93f1e7e-kube-api-access-9l76j\") pod \"crc-debug-gvd4l\" (UID: \"79ead1e6-d991-43a5-b41f-3ee2f93f1e7e\") " pod="openshift-must-gather-chccj/crc-debug-gvd4l" Nov 28 12:14:22 crc kubenswrapper[4772]: I1128 12:14:22.161198 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l76j\" (UniqueName: \"kubernetes.io/projected/79ead1e6-d991-43a5-b41f-3ee2f93f1e7e-kube-api-access-9l76j\") pod \"crc-debug-gvd4l\" (UID: \"79ead1e6-d991-43a5-b41f-3ee2f93f1e7e\") " pod="openshift-must-gather-chccj/crc-debug-gvd4l" Nov 28 12:14:22 crc kubenswrapper[4772]: I1128 12:14:22.184874 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-xg9ps"] Nov 28 12:14:22 crc kubenswrapper[4772]: I1128 12:14:22.188865 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-chccj/crc-debug-gvd4l" Nov 28 12:14:22 crc kubenswrapper[4772]: W1128 12:14:22.263056 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79ead1e6_d991_43a5_b41f_3ee2f93f1e7e.slice/crio-7783ca28322d8fd10a70679289797d0ebbebb12f0e4f612ce3c3965dc13193ed WatchSource:0}: Error finding container 7783ca28322d8fd10a70679289797d0ebbebb12f0e4f612ce3c3965dc13193ed: Status 404 returned error can't find the container with id 7783ca28322d8fd10a70679289797d0ebbebb12f0e4f612ce3c3965dc13193ed Nov 28 12:14:22 crc kubenswrapper[4772]: I1128 12:14:22.301073 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chccj/crc-debug-gvd4l" event={"ID":"79ead1e6-d991-43a5-b41f-3ee2f93f1e7e","Type":"ContainerStarted","Data":"7783ca28322d8fd10a70679289797d0ebbebb12f0e4f612ce3c3965dc13193ed"} Nov 28 12:14:22 crc kubenswrapper[4772]: I1128 12:14:22.304198 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xg9ps" event={"ID":"63746216-4644-4536-b990-f9805cc078da","Type":"ContainerStarted","Data":"281d7a6850757387044bfd34143492716017fbe61a32f96127647e0b14a6447d"} Nov 28 12:14:23 crc kubenswrapper[4772]: I1128 12:14:23.341046 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xg9ps" event={"ID":"63746216-4644-4536-b990-f9805cc078da","Type":"ContainerDied","Data":"5fc45da7f21012c48b3e443d852b2ecdeea21da24d25a09c35a2a15aedf94a4d"} Nov 28 12:14:23 crc kubenswrapper[4772]: I1128 12:14:23.341005 4772 generic.go:334] "Generic (PLEG): container finished" podID="63746216-4644-4536-b990-f9805cc078da" containerID="5fc45da7f21012c48b3e443d852b2ecdeea21da24d25a09c35a2a15aedf94a4d" exitCode=0 Nov 28 12:14:23 crc kubenswrapper[4772]: I1128 12:14:23.349309 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 12:14:23 crc kubenswrapper[4772]: I1128 12:14:23.350312 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chccj/crc-debug-gvd4l" event={"ID":"79ead1e6-d991-43a5-b41f-3ee2f93f1e7e","Type":"ContainerDied","Data":"512b49b485d0e3ae45db4e7a65c2f66af6b45e788df3159a2d1f40266afe9fe5"} Nov 28 12:14:23 crc kubenswrapper[4772]: I1128 12:14:23.349971 4772 generic.go:334] "Generic (PLEG): container finished" podID="79ead1e6-d991-43a5-b41f-3ee2f93f1e7e" containerID="512b49b485d0e3ae45db4e7a65c2f66af6b45e788df3159a2d1f40266afe9fe5" exitCode=0 Nov 28 12:14:23 crc kubenswrapper[4772]: I1128 12:14:23.784076 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-chccj/crc-debug-gvd4l"] Nov 28 12:14:23 crc kubenswrapper[4772]: I1128 12:14:23.792423 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-chccj/crc-debug-gvd4l"] Nov 28 12:14:24 crc kubenswrapper[4772]: I1128 12:14:24.472094 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-chccj/crc-debug-gvd4l" Nov 28 12:14:24 crc kubenswrapper[4772]: I1128 12:14:24.588101 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l76j\" (UniqueName: \"kubernetes.io/projected/79ead1e6-d991-43a5-b41f-3ee2f93f1e7e-kube-api-access-9l76j\") pod \"79ead1e6-d991-43a5-b41f-3ee2f93f1e7e\" (UID: \"79ead1e6-d991-43a5-b41f-3ee2f93f1e7e\") " Nov 28 12:14:24 crc kubenswrapper[4772]: I1128 12:14:24.588231 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79ead1e6-d991-43a5-b41f-3ee2f93f1e7e-host\") pod \"79ead1e6-d991-43a5-b41f-3ee2f93f1e7e\" (UID: \"79ead1e6-d991-43a5-b41f-3ee2f93f1e7e\") " Nov 28 12:14:24 crc kubenswrapper[4772]: I1128 12:14:24.588317 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79ead1e6-d991-43a5-b41f-3ee2f93f1e7e-host" (OuterVolumeSpecName: "host") pod "79ead1e6-d991-43a5-b41f-3ee2f93f1e7e" (UID: "79ead1e6-d991-43a5-b41f-3ee2f93f1e7e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:24 crc kubenswrapper[4772]: I1128 12:14:24.589143 4772 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79ead1e6-d991-43a5-b41f-3ee2f93f1e7e-host\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:24 crc kubenswrapper[4772]: I1128 12:14:24.594199 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79ead1e6-d991-43a5-b41f-3ee2f93f1e7e-kube-api-access-9l76j" (OuterVolumeSpecName: "kube-api-access-9l76j") pod "79ead1e6-d991-43a5-b41f-3ee2f93f1e7e" (UID: "79ead1e6-d991-43a5-b41f-3ee2f93f1e7e"). InnerVolumeSpecName "kube-api-access-9l76j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:14:24 crc kubenswrapper[4772]: I1128 12:14:24.690830 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l76j\" (UniqueName: \"kubernetes.io/projected/79ead1e6-d991-43a5-b41f-3ee2f93f1e7e-kube-api-access-9l76j\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.029595 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-chccj/crc-debug-tqlcn"] Nov 28 12:14:25 crc kubenswrapper[4772]: E1128 12:14:25.030025 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ead1e6-d991-43a5-b41f-3ee2f93f1e7e" containerName="container-00" Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.030042 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ead1e6-d991-43a5-b41f-3ee2f93f1e7e" containerName="container-00" Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.030220 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ead1e6-d991-43a5-b41f-3ee2f93f1e7e" containerName="container-00" Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.030823 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-chccj/crc-debug-tqlcn" Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.112815 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1-host\") pod \"crc-debug-tqlcn\" (UID: \"89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1\") " pod="openshift-must-gather-chccj/crc-debug-tqlcn" Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.112893 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7prq\" (UniqueName: \"kubernetes.io/projected/89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1-kube-api-access-j7prq\") pod \"crc-debug-tqlcn\" (UID: \"89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1\") " pod="openshift-must-gather-chccj/crc-debug-tqlcn" Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.214881 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7prq\" (UniqueName: \"kubernetes.io/projected/89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1-kube-api-access-j7prq\") pod \"crc-debug-tqlcn\" (UID: \"89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1\") " pod="openshift-must-gather-chccj/crc-debug-tqlcn" Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.215047 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1-host\") pod \"crc-debug-tqlcn\" (UID: \"89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1\") " pod="openshift-must-gather-chccj/crc-debug-tqlcn" Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.215202 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1-host\") pod \"crc-debug-tqlcn\" (UID: \"89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1\") " pod="openshift-must-gather-chccj/crc-debug-tqlcn" Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.243084 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7prq\" (UniqueName: \"kubernetes.io/projected/89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1-kube-api-access-j7prq\") pod \"crc-debug-tqlcn\" (UID: \"89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1\") " pod="openshift-must-gather-chccj/crc-debug-tqlcn" Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.351514 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-chccj/crc-debug-tqlcn" Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.369119 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-chccj/crc-debug-gvd4l" Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.369156 4772 scope.go:117] "RemoveContainer" containerID="512b49b485d0e3ae45db4e7a65c2f66af6b45e788df3159a2d1f40266afe9fe5" Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.375500 4772 generic.go:334] "Generic (PLEG): container finished" podID="63746216-4644-4536-b990-f9805cc078da" containerID="fde47f725476b24f4ec942e2b163ea9f2458791d44895dbeb13e90308127ff5e" exitCode=0 Nov 28 12:14:25 crc kubenswrapper[4772]: I1128 12:14:25.375555 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xg9ps" event={"ID":"63746216-4644-4536-b990-f9805cc078da","Type":"ContainerDied","Data":"fde47f725476b24f4ec942e2b163ea9f2458791d44895dbeb13e90308127ff5e"} Nov 28 12:14:25 crc kubenswrapper[4772]: W1128 12:14:25.384639 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89a598b3_7b07_4a4d_8a78_bee9d1cbdcd1.slice/crio-566c63da68e84f7e8a87cee9d4cac1799b6fdbf3471cf55571ada6b51e81a417 WatchSource:0}: Error finding container 566c63da68e84f7e8a87cee9d4cac1799b6fdbf3471cf55571ada6b51e81a417: Status 404 returned error can't find the container with id 566c63da68e84f7e8a87cee9d4cac1799b6fdbf3471cf55571ada6b51e81a417 Nov 28 12:14:26 crc kubenswrapper[4772]: I1128 12:14:26.006913 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79ead1e6-d991-43a5-b41f-3ee2f93f1e7e" path="/var/lib/kubelet/pods/79ead1e6-d991-43a5-b41f-3ee2f93f1e7e/volumes" Nov 28 12:14:26 crc kubenswrapper[4772]: I1128 12:14:26.389949 4772 generic.go:334] "Generic (PLEG): container finished" podID="89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1" containerID="524d2a07504aad946a34d895223faab3afb5e3e4e80695e2026fe9ec9bb78297" exitCode=0 Nov 28 12:14:26 crc kubenswrapper[4772]: I1128 12:14:26.390047 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chccj/crc-debug-tqlcn" event={"ID":"89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1","Type":"ContainerDied","Data":"524d2a07504aad946a34d895223faab3afb5e3e4e80695e2026fe9ec9bb78297"} Nov 28 12:14:26 crc kubenswrapper[4772]: I1128 12:14:26.390168 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chccj/crc-debug-tqlcn" event={"ID":"89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1","Type":"ContainerStarted","Data":"566c63da68e84f7e8a87cee9d4cac1799b6fdbf3471cf55571ada6b51e81a417"} Nov 28 12:14:26 crc kubenswrapper[4772]: I1128 12:14:26.393327 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xg9ps" event={"ID":"63746216-4644-4536-b990-f9805cc078da","Type":"ContainerStarted","Data":"99d3b487505be16e3e1d054f2bebaff7930f7acfbf2e2fc8c446684585aeb8ca"} Nov 28 12:14:26 crc kubenswrapper[4772]: I1128 12:14:26.425384 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-chccj/crc-debug-tqlcn"] Nov 28 12:14:26 crc kubenswrapper[4772]: I1128 12:14:26.436272 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-chccj/crc-debug-tqlcn"] Nov 28 12:14:26 crc kubenswrapper[4772]: I1128 12:14:26.444189 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xg9ps" podStartSLOduration=2.98749816 podStartE2EDuration="5.444163686s" podCreationTimestamp="2025-11-28 12:14:21 +0000 UTC" firstStartedPulling="2025-11-28 12:14:23.349074229 +0000 UTC 
m=+4061.672317456" lastFinishedPulling="2025-11-28 12:14:25.805739755 +0000 UTC m=+4064.128982982" observedRunningTime="2025-11-28 12:14:26.422070707 +0000 UTC m=+4064.745313944" watchObservedRunningTime="2025-11-28 12:14:26.444163686 +0000 UTC m=+4064.767406913" Nov 28 12:14:27 crc kubenswrapper[4772]: I1128 12:14:27.516302 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-chccj/crc-debug-tqlcn" Nov 28 12:14:27 crc kubenswrapper[4772]: I1128 12:14:27.574486 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7prq\" (UniqueName: \"kubernetes.io/projected/89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1-kube-api-access-j7prq\") pod \"89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1\" (UID: \"89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1\") " Nov 28 12:14:27 crc kubenswrapper[4772]: I1128 12:14:27.574656 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1-host\") pod \"89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1\" (UID: \"89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1\") " Nov 28 12:14:27 crc kubenswrapper[4772]: I1128 12:14:27.574811 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1-host" (OuterVolumeSpecName: "host") pod "89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1" (UID: "89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:27 crc kubenswrapper[4772]: I1128 12:14:27.575520 4772 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1-host\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:27 crc kubenswrapper[4772]: I1128 12:14:27.582645 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1-kube-api-access-j7prq" (OuterVolumeSpecName: "kube-api-access-j7prq") pod "89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1" (UID: "89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1"). InnerVolumeSpecName "kube-api-access-j7prq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:14:27 crc kubenswrapper[4772]: I1128 12:14:27.677494 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7prq\" (UniqueName: \"kubernetes.io/projected/89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1-kube-api-access-j7prq\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:28 crc kubenswrapper[4772]: I1128 12:14:28.009107 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1" path="/var/lib/kubelet/pods/89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1/volumes" Nov 28 12:14:28 crc kubenswrapper[4772]: I1128 12:14:28.413507 4772 scope.go:117] "RemoveContainer" containerID="524d2a07504aad946a34d895223faab3afb5e3e4e80695e2026fe9ec9bb78297" Nov 28 12:14:28 crc kubenswrapper[4772]: I1128 12:14:28.413556 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-chccj/crc-debug-tqlcn" Nov 28 12:14:31 crc kubenswrapper[4772]: I1128 12:14:31.576572 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:31 crc kubenswrapper[4772]: I1128 12:14:31.577171 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:31 crc kubenswrapper[4772]: I1128 12:14:31.631833 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:32 crc kubenswrapper[4772]: I1128 12:14:32.002043 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:14:32 crc kubenswrapper[4772]: E1128 12:14:32.002849 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:14:32 crc kubenswrapper[4772]: I1128 12:14:32.521544 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:32 crc kubenswrapper[4772]: I1128 12:14:32.583246 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xg9ps"] Nov 28 12:14:34 crc kubenswrapper[4772]: I1128 12:14:34.487001 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xg9ps" podUID="63746216-4644-4536-b990-f9805cc078da" containerName="registry-server" containerID="cri-o://99d3b487505be16e3e1d054f2bebaff7930f7acfbf2e2fc8c446684585aeb8ca" gracePeriod=2 Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.044566 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.225753 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63746216-4644-4536-b990-f9805cc078da-utilities\") pod \"63746216-4644-4536-b990-f9805cc078da\" (UID: \"63746216-4644-4536-b990-f9805cc078da\") " Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.226039 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbt4m\" (UniqueName: \"kubernetes.io/projected/63746216-4644-4536-b990-f9805cc078da-kube-api-access-kbt4m\") pod \"63746216-4644-4536-b990-f9805cc078da\" (UID: \"63746216-4644-4536-b990-f9805cc078da\") " Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.226227 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63746216-4644-4536-b990-f9805cc078da-catalog-content\") pod \"63746216-4644-4536-b990-f9805cc078da\" (UID: \"63746216-4644-4536-b990-f9805cc078da\") " Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.227244 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63746216-4644-4536-b990-f9805cc078da-utilities" (OuterVolumeSpecName: "utilities") pod "63746216-4644-4536-b990-f9805cc078da" (UID: "63746216-4644-4536-b990-f9805cc078da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.232618 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63746216-4644-4536-b990-f9805cc078da-kube-api-access-kbt4m" (OuterVolumeSpecName: "kube-api-access-kbt4m") pod "63746216-4644-4536-b990-f9805cc078da" (UID: "63746216-4644-4536-b990-f9805cc078da"). InnerVolumeSpecName "kube-api-access-kbt4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.273788 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63746216-4644-4536-b990-f9805cc078da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63746216-4644-4536-b990-f9805cc078da" (UID: "63746216-4644-4536-b990-f9805cc078da"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.328677 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63746216-4644-4536-b990-f9805cc078da-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.328704 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63746216-4644-4536-b990-f9805cc078da-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.328715 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbt4m\" (UniqueName: \"kubernetes.io/projected/63746216-4644-4536-b990-f9805cc078da-kube-api-access-kbt4m\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.505688 4772 generic.go:334] "Generic (PLEG): container finished" podID="63746216-4644-4536-b990-f9805cc078da" containerID="99d3b487505be16e3e1d054f2bebaff7930f7acfbf2e2fc8c446684585aeb8ca" exitCode=0 Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.505757 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xg9ps" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.505794 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xg9ps" event={"ID":"63746216-4644-4536-b990-f9805cc078da","Type":"ContainerDied","Data":"99d3b487505be16e3e1d054f2bebaff7930f7acfbf2e2fc8c446684585aeb8ca"} Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.505905 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xg9ps" event={"ID":"63746216-4644-4536-b990-f9805cc078da","Type":"ContainerDied","Data":"281d7a6850757387044bfd34143492716017fbe61a32f96127647e0b14a6447d"} Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.505975 4772 scope.go:117] "RemoveContainer" containerID="99d3b487505be16e3e1d054f2bebaff7930f7acfbf2e2fc8c446684585aeb8ca" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.531310 4772 scope.go:117] "RemoveContainer" containerID="fde47f725476b24f4ec942e2b163ea9f2458791d44895dbeb13e90308127ff5e" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.547023 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xg9ps"] Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.560091 4772 scope.go:117] "RemoveContainer" containerID="5fc45da7f21012c48b3e443d852b2ecdeea21da24d25a09c35a2a15aedf94a4d" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.562593 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xg9ps"] Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.636905 4772 scope.go:117] "RemoveContainer" containerID="99d3b487505be16e3e1d054f2bebaff7930f7acfbf2e2fc8c446684585aeb8ca" Nov 28 12:14:35 crc kubenswrapper[4772]: E1128 12:14:35.640144 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99d3b487505be16e3e1d054f2bebaff7930f7acfbf2e2fc8c446684585aeb8ca\": container with ID starting with 99d3b487505be16e3e1d054f2bebaff7930f7acfbf2e2fc8c446684585aeb8ca not found: ID does not exist" containerID="99d3b487505be16e3e1d054f2bebaff7930f7acfbf2e2fc8c446684585aeb8ca" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.640202 
4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99d3b487505be16e3e1d054f2bebaff7930f7acfbf2e2fc8c446684585aeb8ca"} err="failed to get container status \"99d3b487505be16e3e1d054f2bebaff7930f7acfbf2e2fc8c446684585aeb8ca\": rpc error: code = NotFound desc = could not find container \"99d3b487505be16e3e1d054f2bebaff7930f7acfbf2e2fc8c446684585aeb8ca\": container with ID starting with 99d3b487505be16e3e1d054f2bebaff7930f7acfbf2e2fc8c446684585aeb8ca not found: ID does not exist" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.640237 4772 scope.go:117] "RemoveContainer" containerID="fde47f725476b24f4ec942e2b163ea9f2458791d44895dbeb13e90308127ff5e" Nov 28 12:14:35 crc kubenswrapper[4772]: E1128 12:14:35.641822 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fde47f725476b24f4ec942e2b163ea9f2458791d44895dbeb13e90308127ff5e\": container with ID starting with fde47f725476b24f4ec942e2b163ea9f2458791d44895dbeb13e90308127ff5e not found: ID does not exist" containerID="fde47f725476b24f4ec942e2b163ea9f2458791d44895dbeb13e90308127ff5e" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.641865 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde47f725476b24f4ec942e2b163ea9f2458791d44895dbeb13e90308127ff5e"} err="failed to get container status \"fde47f725476b24f4ec942e2b163ea9f2458791d44895dbeb13e90308127ff5e\": rpc error: code = NotFound desc = could not find container \"fde47f725476b24f4ec942e2b163ea9f2458791d44895dbeb13e90308127ff5e\": container with ID starting with fde47f725476b24f4ec942e2b163ea9f2458791d44895dbeb13e90308127ff5e not found: ID does not exist" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.641891 4772 scope.go:117] "RemoveContainer" containerID="5fc45da7f21012c48b3e443d852b2ecdeea21da24d25a09c35a2a15aedf94a4d" Nov 28 12:14:35 crc kubenswrapper[4772]: E1128 12:14:35.644626 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fc45da7f21012c48b3e443d852b2ecdeea21da24d25a09c35a2a15aedf94a4d\": container with ID starting with 5fc45da7f21012c48b3e443d852b2ecdeea21da24d25a09c35a2a15aedf94a4d not found: ID does not exist" containerID="5fc45da7f21012c48b3e443d852b2ecdeea21da24d25a09c35a2a15aedf94a4d" Nov 28 12:14:35 crc kubenswrapper[4772]: I1128 12:14:35.644661 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc45da7f21012c48b3e443d852b2ecdeea21da24d25a09c35a2a15aedf94a4d"} err="failed to get container status \"5fc45da7f21012c48b3e443d852b2ecdeea21da24d25a09c35a2a15aedf94a4d\": rpc error: code = NotFound desc = could not find container \"5fc45da7f21012c48b3e443d852b2ecdeea21da24d25a09c35a2a15aedf94a4d\": container with ID starting with 5fc45da7f21012c48b3e443d852b2ecdeea21da24d25a09c35a2a15aedf94a4d not found: ID does not exist" Nov 28 12:14:36 crc kubenswrapper[4772]: I1128 12:14:36.009322 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63746216-4644-4536-b990-f9805cc078da" path="/var/lib/kubelet/pods/63746216-4644-4536-b990-f9805cc078da/volumes" Nov 28 12:14:46 crc kubenswrapper[4772]: I1128 12:14:46.994494 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:14:46 crc kubenswrapper[4772]: E1128 12:14:46.995080 4772 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:14:57 crc kubenswrapper[4772]: I1128 12:14:57.338409 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-667fbdd95d-n4zbv_863244d7-6e70-4ac3-a7f1-485205de6c8e/barbican-api/0.log" Nov 28 12:14:57 crc kubenswrapper[4772]: I1128 12:14:57.409298 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-667fbdd95d-n4zbv_863244d7-6e70-4ac3-a7f1-485205de6c8e/barbican-api-log/0.log" Nov 28 12:14:57 crc kubenswrapper[4772]: I1128 12:14:57.525748 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-bd5f4f5b6-n56r2_c591ea97-2d66-45fe-85e2-1c22c6af8218/barbican-keystone-listener/0.log" Nov 28 12:14:57 crc kubenswrapper[4772]: I1128 12:14:57.659340 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-bd5f4f5b6-n56r2_c591ea97-2d66-45fe-85e2-1c22c6af8218/barbican-keystone-listener-log/0.log" Nov 28 12:14:57 crc kubenswrapper[4772]: I1128 12:14:57.666523 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86946c9dff-9km2x_8f72d166-9d59-443d-9af2-3d93c158ef98/barbican-worker/0.log" Nov 28 12:14:57 crc kubenswrapper[4772]: I1128 12:14:57.768478 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86946c9dff-9km2x_8f72d166-9d59-443d-9af2-3d93c158ef98/barbican-worker-log/0.log" Nov 28 12:14:57 crc kubenswrapper[4772]: I1128 12:14:57.882987 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-9bjxt_ab57579b-65be-4ef3-977f-574ca00f3d9a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:14:57 crc kubenswrapper[4772]: I1128 12:14:57.995211 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3d55dc2a-8d2f-4f27-82ef-11744255c40c/ceilometer-central-agent/0.log" Nov 28 12:14:58 crc kubenswrapper[4772]: I1128 12:14:58.044881 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3d55dc2a-8d2f-4f27-82ef-11744255c40c/proxy-httpd/0.log" Nov 28 12:14:58 crc kubenswrapper[4772]: I1128 12:14:58.068191 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3d55dc2a-8d2f-4f27-82ef-11744255c40c/ceilometer-notification-agent/0.log" Nov 28 12:14:58 crc kubenswrapper[4772]: I1128 12:14:58.112804 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3d55dc2a-8d2f-4f27-82ef-11744255c40c/sg-core/0.log" Nov 28 12:14:58 crc kubenswrapper[4772]: I1128 12:14:58.249090 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_16a2193c-fc2c-489d-9aff-edfe826fdb75/cinder-api/0.log" Nov 28 12:14:58 crc kubenswrapper[4772]: I1128 12:14:58.265600 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_16a2193c-fc2c-489d-9aff-edfe826fdb75/cinder-api-log/0.log" Nov 28 12:14:58 crc kubenswrapper[4772]: I1128 12:14:58.471347 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c/probe/0.log" Nov 28 
12:14:58 crc kubenswrapper[4772]: I1128 12:14:58.474060 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7a1d37cd-e554-4bb3-a3b4-e2338fc64a4c/cinder-scheduler/0.log" Nov 28 12:14:58 crc kubenswrapper[4772]: I1128 12:14:58.513634 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-68zj7_9cf12a49-cbf2-4721-92a0-e4f9f88deb0c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:14:58 crc kubenswrapper[4772]: I1128 12:14:58.648183 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9pd6j_b5a1a6ce-5a94-4f1a-a6c4-5bd7b71782fe/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:14:58 crc kubenswrapper[4772]: I1128 12:14:58.715663 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-shv6k_03f3a9a0-ccb5-41d6-8ba8-419fb9775213/init/0.log" Nov 28 12:14:59 crc kubenswrapper[4772]: I1128 12:14:59.524276 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-shv6k_03f3a9a0-ccb5-41d6-8ba8-419fb9775213/init/0.log" Nov 28 12:14:59 crc kubenswrapper[4772]: I1128 12:14:59.546875 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-dx99x_49998ed7-1acb-4812-af0d-822d07292334/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:14:59 crc kubenswrapper[4772]: I1128 12:14:59.615663 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-shv6k_03f3a9a0-ccb5-41d6-8ba8-419fb9775213/dnsmasq-dns/0.log" Nov 28 12:14:59 crc kubenswrapper[4772]: I1128 12:14:59.759186 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_94aac27d-0e3b-431a-be6e-a88e1eeb16db/glance-httpd/0.log" Nov 28 12:14:59 crc kubenswrapper[4772]: I1128 12:14:59.803511 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_94aac27d-0e3b-431a-be6e-a88e1eeb16db/glance-log/0.log" Nov 28 12:14:59 crc kubenswrapper[4772]: I1128 12:14:59.922828 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_090e21bf-8aa4-47f0-99ce-d8225cdac91c/glance-log/0.log" Nov 28 12:14:59 crc kubenswrapper[4772]: I1128 12:14:59.949987 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_090e21bf-8aa4-47f0-99ce-d8225cdac91c/glance-httpd/0.log" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.135971 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f99664784-xpqjq_3a9ada7a-c788-41ad-87a6-431ba8c94394/horizon/0.log" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.170812 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k"] Nov 28 12:15:00 crc kubenswrapper[4772]: E1128 12:15:00.171261 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1" containerName="container-00" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.171280 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1" containerName="container-00" Nov 28 12:15:00 crc kubenswrapper[4772]: E1128 12:15:00.171299 4772 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="63746216-4644-4536-b990-f9805cc078da" containerName="extract-content" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.171306 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="63746216-4644-4536-b990-f9805cc078da" containerName="extract-content" Nov 28 12:15:00 crc kubenswrapper[4772]: E1128 12:15:00.171335 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63746216-4644-4536-b990-f9805cc078da" containerName="registry-server" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.171341 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="63746216-4644-4536-b990-f9805cc078da" containerName="registry-server" Nov 28 12:15:00 crc kubenswrapper[4772]: E1128 12:15:00.171385 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63746216-4644-4536-b990-f9805cc078da" containerName="extract-utilities" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.171394 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="63746216-4644-4536-b990-f9805cc078da" containerName="extract-utilities" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.171582 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a598b3-7b07-4a4d-8a78-bee9d1cbdcd1" containerName="container-00" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.171608 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="63746216-4644-4536-b990-f9805cc078da" containerName="registry-server" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.172196 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.175226 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.175525 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.182043 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k"] Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.202973 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-94trn_99b393cd-c63f-4007-9cb8-26a6e5710794/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.266493 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-config-volume\") pod \"collect-profiles-29405535-xqg2k\" (UID: \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.266551 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-secret-volume\") pod \"collect-profiles-29405535-xqg2k\" (UID: \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.266604 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zbc5\" (UniqueName: \"kubernetes.io/projected/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-kube-api-access-9zbc5\") pod \"collect-profiles-29405535-xqg2k\" (UID: \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.368012 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-config-volume\") pod \"collect-profiles-29405535-xqg2k\" (UID: \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.368072 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-secret-volume\") pod \"collect-profiles-29405535-xqg2k\" (UID: \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.368126 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zbc5\" (UniqueName: \"kubernetes.io/projected/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-kube-api-access-9zbc5\") pod \"collect-profiles-29405535-xqg2k\" (UID: \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.369340 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-config-volume\") pod \"collect-profiles-29405535-xqg2k\" (UID: \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.427387 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-78lv5_e65b46c0-b274-454b-b98d-b2425334abfd/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.567325 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f99664784-xpqjq_3a9ada7a-c788-41ad-87a6-431ba8c94394/horizon-log/0.log" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.713300 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-56987d8b67-lwl5z_473cc657-5696-4761-a692-e4929954d45b/keystone-api/0.log" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.837506 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-secret-volume\") pod \"collect-profiles-29405535-xqg2k\" (UID: \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" Nov 28 12:15:00 crc kubenswrapper[4772]: I1128 12:15:00.837575 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zbc5\" (UniqueName: \"kubernetes.io/projected/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-kube-api-access-9zbc5\") pod \"collect-profiles-29405535-xqg2k\" (UID: \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" Nov 28 12:15:01 crc kubenswrapper[4772]: I1128 12:15:01.010516 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_3bfc712b-ffa2-4fc0-825c-def988a3f1b2/kube-state-metrics/0.log" Nov 28 12:15:01 crc kubenswrapper[4772]: I1128 12:15:01.023336 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29405521-g4n7r_058bf0b5-6899-4a5d-a098-e40b52cfd512/keystone-cron/0.log" Nov 28 12:15:01 crc kubenswrapper[4772]: I1128 12:15:01.090013 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" Nov 28 12:15:01 crc kubenswrapper[4772]: I1128 12:15:01.217848 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dwhpp_0b865b7c-a1c7-4f0b-b289-d980f76a946d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:15:01 crc kubenswrapper[4772]: I1128 12:15:01.561850 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-758875cc6f-fmsqk_d251a047-3f90-4db3-8cae-b65b24395fdf/neutron-httpd/0.log" Nov 28 12:15:01 crc kubenswrapper[4772]: I1128 12:15:01.595074 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-758875cc6f-fmsqk_d251a047-3f90-4db3-8cae-b65b24395fdf/neutron-api/0.log" Nov 28 12:15:01 crc kubenswrapper[4772]: I1128 12:15:01.610447 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k"] Nov 28 12:15:01 crc kubenswrapper[4772]: I1128 12:15:01.643756 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-swrxg_49725426-9a39-40a3-8921-dadd52884a4a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:15:01 crc kubenswrapper[4772]: I1128 12:15:01.826698 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" event={"ID":"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1","Type":"ContainerStarted","Data":"a8b5599d0d79466a8c594dbda5bc707f6faded4e68f75443c4cb4ca412df4fd1"} Nov 28 12:15:01 crc kubenswrapper[4772]: I1128 12:15:01.826947 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" event={"ID":"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1","Type":"ContainerStarted","Data":"b768cc05ab45ff4459520f12fa2d3e2f522b7348a4499054afbad857b29ce422"} Nov 28 12:15:01 crc kubenswrapper[4772]: I1128 12:15:01.864192 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" podStartSLOduration=1.864168582 podStartE2EDuration="1.864168582s" podCreationTimestamp="2025-11-28 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:15:01.860100008 +0000 UTC m=+4100.183343235" watchObservedRunningTime="2025-11-28 12:15:01.864168582 +0000 UTC m=+4100.187411809" Nov 28 12:15:02 crc kubenswrapper[4772]: I1128 12:15:02.001061 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:15:02 crc kubenswrapper[4772]: E1128 12:15:02.001346 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:15:02 crc kubenswrapper[4772]: I1128 12:15:02.235352 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_04cf07d5-2221-4a24-af67-730b13cd2021/nova-api-log/0.log" Nov 28 12:15:02 crc kubenswrapper[4772]: I1128 12:15:02.375759 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_367a81eb-3924-42ce-8fcf-258e2ad0b494/nova-cell0-conductor-conductor/0.log" Nov 28 12:15:02 crc kubenswrapper[4772]: I1128 12:15:02.569547 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4e7b45c1-3ed2-4693-b75b-faf73867de92/nova-cell1-conductor-conductor/0.log" Nov 28 12:15:02 crc kubenswrapper[4772]: I1128 12:15:02.841687 4772 generic.go:334] "Generic (PLEG): container finished" podID="25eb67f0-8a4d-4333-ab0f-e6461b6e98f1" containerID="a8b5599d0d79466a8c594dbda5bc707f6faded4e68f75443c4cb4ca412df4fd1" exitCode=0 Nov 28 12:15:02 crc kubenswrapper[4772]: I1128 12:15:02.841727 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" event={"ID":"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1","Type":"ContainerDied","Data":"a8b5599d0d79466a8c594dbda5bc707f6faded4e68f75443c4cb4ca412df4fd1"} Nov 28 12:15:02 crc kubenswrapper[4772]: I1128 12:15:02.877290 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-pl4v2_3bcf40ed-6681-4685-8277-c31e223c9686/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:15:02 crc kubenswrapper[4772]: I1128 12:15:02.922112 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_04cf07d5-2221-4a24-af67-730b13cd2021/nova-api-api/0.log" Nov 28 12:15:02 crc kubenswrapper[4772]: I1128 12:15:02.923113 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_9bb843e3-1f1d-4d9d-8e2b-aa7d3cfc170a/nova-cell1-novncproxy-novncproxy/0.log" Nov 28 12:15:03 crc kubenswrapper[4772]: I1128 12:15:03.317699 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_36168857-1f1a-48f9-8adb-53889086486e/nova-metadata-log/0.log" Nov 28 12:15:03 crc kubenswrapper[4772]: I1128 12:15:03.529867 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d68d6b7e-6515-4085-82db-aa7d361d06e6/nova-scheduler-scheduler/0.log" Nov 28 12:15:03 crc kubenswrapper[4772]: I1128 12:15:03.545280 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fbc92675-93c6-4d66-afb0-d83636cbf853/mysql-bootstrap/0.log" Nov 28 12:15:03 crc kubenswrapper[4772]: I1128 12:15:03.735757 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fbc92675-93c6-4d66-afb0-d83636cbf853/mysql-bootstrap/0.log" Nov 28 12:15:03 crc kubenswrapper[4772]: I1128 12:15:03.777953 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fbc92675-93c6-4d66-afb0-d83636cbf853/galera/0.log" Nov 28 12:15:03 crc kubenswrapper[4772]: I1128 12:15:03.898874 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20/mysql-bootstrap/0.log" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.235772 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20/galera/0.log" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.241256 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.248099 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee1ab95d-9a21-4dc0-b78e-dfcb4e597c20/mysql-bootstrap/0.log" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.346990 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-secret-volume\") pod \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\" (UID: \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\") " Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.347806 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-config-volume" (OuterVolumeSpecName: "config-volume") pod "25eb67f0-8a4d-4333-ab0f-e6461b6e98f1" (UID: "25eb67f0-8a4d-4333-ab0f-e6461b6e98f1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.347339 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-config-volume\") pod \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\" (UID: \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\") " Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.347917 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zbc5\" (UniqueName: \"kubernetes.io/projected/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-kube-api-access-9zbc5\") pod \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\" (UID: \"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1\") " Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.348386 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.354723 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-kube-api-access-9zbc5" (OuterVolumeSpecName: "kube-api-access-9zbc5") pod "25eb67f0-8a4d-4333-ab0f-e6461b6e98f1" (UID: "25eb67f0-8a4d-4333-ab0f-e6461b6e98f1"). InnerVolumeSpecName "kube-api-access-9zbc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.365934 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "25eb67f0-8a4d-4333-ab0f-e6461b6e98f1" (UID: "25eb67f0-8a4d-4333-ab0f-e6461b6e98f1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.412493 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_90047d06-b7f8-416f-a4f1-6f76b5b94f39/openstackclient/0.log" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.448095 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-9xz5n_620b05d3-f04b-4d52-b3c3-039dcc751696/openstack-network-exporter/0.log" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.450338 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.450382 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zbc5\" (UniqueName: \"kubernetes.io/projected/25eb67f0-8a4d-4333-ab0f-e6461b6e98f1-kube-api-access-9zbc5\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.671939 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gjbm2_dbdbf695-f81d-431b-8330-6745cdbf9ab1/ovsdb-server-init/0.log" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.672381 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w"] Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.680174 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405490-lxb9w"] Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.842020 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gjbm2_dbdbf695-f81d-431b-8330-6745cdbf9ab1/ovsdb-server-init/0.log" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.852746 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gjbm2_dbdbf695-f81d-431b-8330-6745cdbf9ab1/ovs-vswitchd/0.log" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.873754 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" event={"ID":"25eb67f0-8a4d-4333-ab0f-e6461b6e98f1","Type":"ContainerDied","Data":"b768cc05ab45ff4459520f12fa2d3e2f522b7348a4499054afbad857b29ce422"} Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.873786 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-xqg2k" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.873794 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b768cc05ab45ff4459520f12fa2d3e2f522b7348a4499054afbad857b29ce422" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.903084 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gjbm2_dbdbf695-f81d-431b-8330-6745cdbf9ab1/ovsdb-server/0.log" Nov 28 12:15:04 crc kubenswrapper[4772]: I1128 12:15:04.942799 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_36168857-1f1a-48f9-8adb-53889086486e/nova-metadata-metadata/0.log" Nov 28 12:15:05 crc kubenswrapper[4772]: I1128 12:15:05.043519 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qzrrh_42fee486-89c9-4f0e-9db6-ac695b62a588/ovn-controller/0.log" Nov 28 12:15:05 crc kubenswrapper[4772]: I1128 12:15:05.195181 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9lxg4_9b6d18b1-abcf-4c1f-b811-6df572185255/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:15:05 crc kubenswrapper[4772]: I1128 12:15:05.289053 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6a9655ce-0d05-449e-899d-6cbaa25cd5e9/ovn-northd/0.log" Nov 28 12:15:05 crc kubenswrapper[4772]: I1128 12:15:05.297223 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6a9655ce-0d05-449e-899d-6cbaa25cd5e9/openstack-network-exporter/0.log" Nov 28 12:15:05 crc kubenswrapper[4772]: I1128 12:15:05.465654 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_934fc42a-8a76-4b95-9ad0-5e13fa47d1cb/openstack-network-exporter/0.log" Nov 28 12:15:05 crc kubenswrapper[4772]: I1128 12:15:05.664568 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_934fc42a-8a76-4b95-9ad0-5e13fa47d1cb/ovsdbserver-nb/0.log" Nov 28 12:15:05 crc kubenswrapper[4772]: I1128 12:15:05.803192 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ec35ef2e-e0de-4d42-9ab0-35033d549ac9/openstack-network-exporter/0.log" Nov 28 12:15:05 crc kubenswrapper[4772]: I1128 12:15:05.810344 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ec35ef2e-e0de-4d42-9ab0-35033d549ac9/ovsdbserver-sb/0.log" Nov 28 12:15:05 crc kubenswrapper[4772]: I1128 12:15:05.938590 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-74fc54dcd4-9z4wp_0c712f4c-0d11-4e33-a725-4a5ec8f62c5f/placement-api/0.log" Nov 28 12:15:06 crc kubenswrapper[4772]: I1128 12:15:06.006034 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dcd8659-535c-4cd7-9a08-f7a67afbadcd" path="/var/lib/kubelet/pods/9dcd8659-535c-4cd7-9a08-f7a67afbadcd/volumes" Nov 28 12:15:06 crc kubenswrapper[4772]: I1128 12:15:06.091994 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-74fc54dcd4-9z4wp_0c712f4c-0d11-4e33-a725-4a5ec8f62c5f/placement-log/0.log" Nov 28 12:15:06 crc kubenswrapper[4772]: I1128 12:15:06.140696 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9e3b8854-8b5b-441d-97a7-12e48cffafb6/setup-container/0.log" Nov 28 12:15:06 crc kubenswrapper[4772]: I1128 12:15:06.359522 4772 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9e3b8854-8b5b-441d-97a7-12e48cffafb6/setup-container/0.log" Nov 28 12:15:06 crc kubenswrapper[4772]: I1128 12:15:06.419571 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1a8859cb-c89a-4d2c-ac6b-6abd31388e61/setup-container/0.log" Nov 28 12:15:06 crc kubenswrapper[4772]: I1128 12:15:06.496374 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9e3b8854-8b5b-441d-97a7-12e48cffafb6/rabbitmq/0.log" Nov 28 12:15:06 crc kubenswrapper[4772]: I1128 12:15:06.682211 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1a8859cb-c89a-4d2c-ac6b-6abd31388e61/setup-container/0.log" Nov 28 12:15:06 crc kubenswrapper[4772]: I1128 12:15:06.704877 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1a8859cb-c89a-4d2c-ac6b-6abd31388e61/rabbitmq/0.log" Nov 28 12:15:06 crc kubenswrapper[4772]: I1128 12:15:06.795902 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-t4xlp_02da511d-7da9-49de-bed3-34ecbf58b864/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:15:06 crc kubenswrapper[4772]: I1128 12:15:06.941781 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-dkl6g_62194471-8bd2-46f1-9891-e2bfe7cf8d67/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:15:07 crc kubenswrapper[4772]: I1128 12:15:07.000176 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5wnnw_8a8788a2-2374-4b48-b5c6-f6ab77a21711/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:15:07 crc kubenswrapper[4772]: I1128 12:15:07.123179 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-mxktm_a262a424-caf0-4d6e-95e6-c0ca5ff2473b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:15:07 crc kubenswrapper[4772]: I1128 12:15:07.231118 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-wwp4z_8e844804-6f77-4b6c-93c7-cdc083c5673d/ssh-known-hosts-edpm-deployment/0.log" Nov 28 12:15:07 crc kubenswrapper[4772]: I1128 12:15:07.485189 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d5fc6594c-kj2rm_4c10f0c5-2315-469e-bda3-d3b66ab776e6/proxy-server/0.log" Nov 28 12:15:07 crc kubenswrapper[4772]: I1128 12:15:07.495955 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d5fc6594c-kj2rm_4c10f0c5-2315-469e-bda3-d3b66ab776e6/proxy-httpd/0.log" Nov 28 12:15:07 crc kubenswrapper[4772]: I1128 12:15:07.571896 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wmtrl_1d692e87-9cb2-4bf1-ae67-f23e0c59e5a6/swift-ring-rebalance/0.log" Nov 28 12:15:07 crc kubenswrapper[4772]: I1128 12:15:07.679838 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/account-reaper/0.log" Nov 28 12:15:07 crc kubenswrapper[4772]: I1128 12:15:07.740838 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/account-auditor/0.log" Nov 28 12:15:08 crc kubenswrapper[4772]: I1128 12:15:08.170782 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/account-server/0.log" Nov 28 12:15:08 crc kubenswrapper[4772]: I1128 12:15:08.170823 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/container-auditor/0.log" Nov 28 12:15:08 crc kubenswrapper[4772]: I1128 12:15:08.182855 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/container-replicator/0.log" Nov 28 12:15:08 crc kubenswrapper[4772]: I1128 12:15:08.188094 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/account-replicator/0.log" Nov 28 12:15:08 crc kubenswrapper[4772]: I1128 12:15:08.382712 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/container-server/0.log" Nov 28 12:15:08 crc kubenswrapper[4772]: I1128 12:15:08.407207 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/object-auditor/0.log" Nov 28 12:15:08 crc kubenswrapper[4772]: I1128 12:15:08.434500 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/container-updater/0.log" Nov 28 12:15:08 crc kubenswrapper[4772]: I1128 12:15:08.435309 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/object-expirer/0.log" Nov 28 12:15:08 crc kubenswrapper[4772]: I1128 12:15:08.620368 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/object-replicator/0.log" Nov 28 12:15:08 crc kubenswrapper[4772]: I1128 12:15:08.641259 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/object-server/0.log" Nov 28 12:15:08 crc kubenswrapper[4772]: I1128 12:15:08.650471 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/object-updater/0.log" Nov 28 12:15:08 crc kubenswrapper[4772]: I1128 12:15:08.665669 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/rsync/0.log" Nov 28 12:15:08 crc kubenswrapper[4772]: I1128 12:15:08.841309 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e9a12326-4e22-49fc-a6d5-b103867d9d0c/swift-recon-cron/0.log" Nov 28 12:15:08 crc kubenswrapper[4772]: I1128 12:15:08.868468 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-9srrh_bed1e099-94a0-45ab-9686-4488e1df9252/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:15:09 crc kubenswrapper[4772]: I1128 12:15:09.050136 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_39592588-10c2-45fd-88fb-cb63f200c871/tempest-tests-tempest-tests-runner/0.log" Nov 28 12:15:09 crc kubenswrapper[4772]: I1128 12:15:09.056521 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_3e1800cb-ecd9-443b-b9a9-6437d8abbfc7/test-operator-logs-container/0.log" Nov 28 12:15:09 crc kubenswrapper[4772]: I1128 12:15:09.259740 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-t78xp_553d598e-4476-450b-952b-f8269626bfa5/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 12:15:15 crc kubenswrapper[4772]: I1128 12:15:15.993945 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:15:15 crc kubenswrapper[4772]: E1128 12:15:15.994711 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:15:19 crc kubenswrapper[4772]: I1128 12:15:19.515753 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f9e491cd-f369-412c-9b41-77844ff3057d/memcached/0.log" Nov 28 12:15:29 crc kubenswrapper[4772]: I1128 12:15:29.996185 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:15:29 crc kubenswrapper[4772]: E1128 12:15:29.996939 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:15:36 crc kubenswrapper[4772]: I1128 12:15:36.491623 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9_565eb734-9f39-468d-97ee-7d119b15d945/util/0.log" Nov 28 12:15:36 crc kubenswrapper[4772]: I1128 12:15:36.673516 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9_565eb734-9f39-468d-97ee-7d119b15d945/util/0.log" Nov 28 12:15:36 crc kubenswrapper[4772]: I1128 12:15:36.705685 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9_565eb734-9f39-468d-97ee-7d119b15d945/pull/0.log" Nov 28 12:15:36 crc kubenswrapper[4772]: I1128 12:15:36.760349 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9_565eb734-9f39-468d-97ee-7d119b15d945/pull/0.log" Nov 28 12:15:36 crc kubenswrapper[4772]: I1128 12:15:36.875053 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9_565eb734-9f39-468d-97ee-7d119b15d945/pull/0.log" Nov 28 12:15:36 crc kubenswrapper[4772]: I1128 12:15:36.882006 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9_565eb734-9f39-468d-97ee-7d119b15d945/util/0.log" Nov 28 12:15:36 crc kubenswrapper[4772]: I1128 12:15:36.934568 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_407a49296837c8c1ba2ba3d7e1a48e4734f20cac0b622a348cb10970b8b48g9_565eb734-9f39-468d-97ee-7d119b15d945/extract/0.log" Nov 28 12:15:37 crc kubenswrapper[4772]: I1128 12:15:37.073110 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-8n59v_c063126d-a9d6-4a2c-96b4-0b0a42a94fff/kube-rbac-proxy/0.log" Nov 28 12:15:37 crc kubenswrapper[4772]: I1128 12:15:37.098398 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-8n59v_c063126d-a9d6-4a2c-96b4-0b0a42a94fff/manager/0.log" Nov 28 12:15:37 crc kubenswrapper[4772]: I1128 12:15:37.865259 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-zkr58_e98df0ac-d8d5-49fd-a331-509b0736bbb1/kube-rbac-proxy/0.log" Nov 28 12:15:37 crc kubenswrapper[4772]: I1128 12:15:37.928160 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-zkr58_e98df0ac-d8d5-49fd-a331-509b0736bbb1/manager/0.log" Nov 28 12:15:38 crc kubenswrapper[4772]: I1128 12:15:38.036385 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-kwb9r_5278900b-7407-46c5-b420-c5569e508132/kube-rbac-proxy/0.log" Nov 28 12:15:38 crc kubenswrapper[4772]: I1128 12:15:38.056122 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-kwb9r_5278900b-7407-46c5-b420-c5569e508132/manager/0.log" Nov 28 12:15:38 crc kubenswrapper[4772]: I1128 12:15:38.113444 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-7spr8_7657168c-6a48-435a-92c3-b93970b60d07/kube-rbac-proxy/0.log" Nov 28 12:15:38 crc kubenswrapper[4772]: I1128 12:15:38.280108 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-7spr8_7657168c-6a48-435a-92c3-b93970b60d07/manager/0.log" Nov 28 12:15:38 crc kubenswrapper[4772]: I1128 12:15:38.319156 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-6r566_7f96e59b-e8a5-471a-8e43-4ae8edfbc7bb/kube-rbac-proxy/0.log" Nov 28 12:15:38 crc kubenswrapper[4772]: I1128 12:15:38.330748 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-6r566_7f96e59b-e8a5-471a-8e43-4ae8edfbc7bb/manager/0.log" Nov 28 12:15:38 crc kubenswrapper[4772]: I1128 12:15:38.444757 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-qksc2_b0c3e372-422f-46e4-94e3-51ed4b3c0fd0/kube-rbac-proxy/0.log" Nov 28 12:15:38 crc kubenswrapper[4772]: I1128 12:15:38.519546 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-qksc2_b0c3e372-422f-46e4-94e3-51ed4b3c0fd0/manager/0.log" Nov 28 12:15:38 crc kubenswrapper[4772]: I1128 12:15:38.605645 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-c9nnk_90486ac7-ac7e-418a-9f2a-5bf934e996ca/kube-rbac-proxy/0.log" Nov 28 12:15:38 crc kubenswrapper[4772]: I1128 12:15:38.755503 4772 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-tj9m8_c29a1c46-5112-4d85-8f8f-b494575bd428/kube-rbac-proxy/0.log" Nov 28 12:15:38 crc kubenswrapper[4772]: I1128 12:15:38.758411 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-c9nnk_90486ac7-ac7e-418a-9f2a-5bf934e996ca/manager/0.log" Nov 28 12:15:38 crc kubenswrapper[4772]: I1128 12:15:38.766688 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-tj9m8_c29a1c46-5112-4d85-8f8f-b494575bd428/manager/0.log" Nov 28 12:15:38 crc kubenswrapper[4772]: I1128 12:15:38.933210 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-ftntr_f569792f-b95e-4f7a-b58e-22bd27c56dfd/kube-rbac-proxy/0.log" Nov 28 12:15:38 crc kubenswrapper[4772]: I1128 12:15:38.981859 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-ftntr_f569792f-b95e-4f7a-b58e-22bd27c56dfd/manager/0.log" Nov 28 12:15:39 crc kubenswrapper[4772]: I1128 12:15:39.605825 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-2w6xw_d117b1a7-48be-4cc5-928f-b22d31a16b7f/manager/0.log" Nov 28 12:15:39 crc kubenswrapper[4772]: I1128 12:15:39.609214 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-2w6xw_d117b1a7-48be-4cc5-928f-b22d31a16b7f/kube-rbac-proxy/0.log" Nov 28 12:15:39 crc kubenswrapper[4772]: I1128 12:15:39.652056 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-nfb47_2dc36b3d-99ac-4a89-bdc3-309a12cc887e/kube-rbac-proxy/0.log" Nov 28 12:15:39 crc kubenswrapper[4772]: I1128 12:15:39.787586 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-nfb47_2dc36b3d-99ac-4a89-bdc3-309a12cc887e/manager/0.log" Nov 28 12:15:39 crc kubenswrapper[4772]: I1128 12:15:39.814480 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-l8rvz_f67d3c6d-0b62-4162-bfeb-24da441f5edc/kube-rbac-proxy/0.log" Nov 28 12:15:39 crc kubenswrapper[4772]: I1128 12:15:39.874048 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-l8rvz_f67d3c6d-0b62-4162-bfeb-24da441f5edc/manager/0.log" Nov 28 12:15:40 crc kubenswrapper[4772]: I1128 12:15:40.005136 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-sc4xf_a7f0f276-5402-4e33-bd63-f6df7819f966/kube-rbac-proxy/0.log" Nov 28 12:15:40 crc kubenswrapper[4772]: I1128 12:15:40.047785 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-k7z7p_7f63617e-c125-40e3-a273-4180f7d8d45c/kube-rbac-proxy/0.log" Nov 28 12:15:40 crc kubenswrapper[4772]: I1128 12:15:40.093761 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-sc4xf_a7f0f276-5402-4e33-bd63-f6df7819f966/manager/0.log" Nov 28 12:15:40 crc kubenswrapper[4772]: I1128 
12:15:40.197634 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-k7z7p_7f63617e-c125-40e3-a273-4180f7d8d45c/manager/0.log" Nov 28 12:15:40 crc kubenswrapper[4772]: I1128 12:15:40.276114 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs_47a38974-64f9-46ba-b4cf-f61c0d3a485e/manager/0.log" Nov 28 12:15:40 crc kubenswrapper[4772]: I1128 12:15:40.284108 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6btvzhs_47a38974-64f9-46ba-b4cf-f61c0d3a485e/kube-rbac-proxy/0.log" Nov 28 12:15:40 crc kubenswrapper[4772]: I1128 12:15:40.512022 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-g4wlz_7cc8caa8-c50f-432d-9d19-8b3f1603d90a/registry-server/0.log" Nov 28 12:15:40 crc kubenswrapper[4772]: I1128 12:15:40.664855 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7f586794b9-rndfk_b4035afc-5f40-4643-8f3c-39a68fe3efa6/operator/0.log" Nov 28 12:15:40 crc kubenswrapper[4772]: I1128 12:15:40.718841 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-jpf2b_e955b059-294d-40ab-b4af-6bbf7c5bb2e6/kube-rbac-proxy/0.log" Nov 28 12:15:40 crc kubenswrapper[4772]: I1128 12:15:40.804310 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-jpf2b_e955b059-294d-40ab-b4af-6bbf7c5bb2e6/manager/0.log" Nov 28 12:15:40 crc kubenswrapper[4772]: I1128 12:15:40.917206 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-2mc7h_2ce66d6e-19b8-41e7-890d-f17f4be5a920/kube-rbac-proxy/0.log" Nov 28 12:15:40 crc kubenswrapper[4772]: I1128 12:15:40.954398 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-2mc7h_2ce66d6e-19b8-41e7-890d-f17f4be5a920/manager/0.log" Nov 28 12:15:41 crc kubenswrapper[4772]: I1128 12:15:41.142750 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ldrjz_6e95de97-8ad3-493a-a98f-5541e23ca701/operator/0.log" Nov 28 12:15:41 crc kubenswrapper[4772]: I1128 12:15:41.191865 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-pqx6v_79791884-38fa-4d4e-ace2-cd02b0df26ab/kube-rbac-proxy/0.log" Nov 28 12:15:41 crc kubenswrapper[4772]: I1128 12:15:41.375653 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-cxg56_9632aabc-46f3-44f3-b6ff-01923cddd5fa/kube-rbac-proxy/0.log" Nov 28 12:15:41 crc kubenswrapper[4772]: I1128 12:15:41.395339 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-pqx6v_79791884-38fa-4d4e-ace2-cd02b0df26ab/manager/0.log" Nov 28 12:15:41 crc kubenswrapper[4772]: I1128 12:15:41.506275 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-cxg56_9632aabc-46f3-44f3-b6ff-01923cddd5fa/manager/0.log" Nov 28 12:15:41 crc 
kubenswrapper[4772]: I1128 12:15:41.538763 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6fbf799579-db6rg_ed05bf4d-d7d9-40eb-965a-5c866fc76b3c/manager/0.log" Nov 28 12:15:41 crc kubenswrapper[4772]: I1128 12:15:41.580232 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-956cf_7b6bce9b-9e9a-414a-aad7-5a8667c9557d/manager/0.log" Nov 28 12:15:41 crc kubenswrapper[4772]: I1128 12:15:41.614000 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-956cf_7b6bce9b-9e9a-414a-aad7-5a8667c9557d/kube-rbac-proxy/0.log" Nov 28 12:15:41 crc kubenswrapper[4772]: I1128 12:15:41.749006 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-cfl4r_9d009ed5-21d1-4f1c-b1ec-bef39cf8a265/kube-rbac-proxy/0.log" Nov 28 12:15:41 crc kubenswrapper[4772]: I1128 12:15:41.761573 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-cfl4r_9d009ed5-21d1-4f1c-b1ec-bef39cf8a265/manager/0.log" Nov 28 12:15:44 crc kubenswrapper[4772]: I1128 12:15:44.994688 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:15:44 crc kubenswrapper[4772]: E1128 12:15:44.994952 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfsjk_openshift-machine-config-operator(8e4e32c1-8c60-4972-ae38-a20020b374fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" Nov 28 12:15:48 crc kubenswrapper[4772]: I1128 12:15:48.265074 4772 scope.go:117] "RemoveContainer" containerID="b854aca9587ea5ff300f348e0a85d24629920c328b3b6510fc8e8cd42323cadc" Nov 28 12:15:59 crc kubenswrapper[4772]: I1128 12:15:59.995348 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1" Nov 28 12:16:00 crc kubenswrapper[4772]: I1128 12:16:00.419066 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"16db3c6e51ec9fa8800b07089a7130627bed271fe59903f0a6b34025d207e7ff"} Nov 28 12:16:02 crc kubenswrapper[4772]: I1128 12:16:02.216449 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-w8l8k_8b29fa00-3205-4f7c-8f5f-671c7921029b/control-plane-machine-set-operator/0.log" Nov 28 12:16:02 crc kubenswrapper[4772]: I1128 12:16:02.335336 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4ct44_d593cf3a-ced7-4f3a-a15a-10c3309a2ee3/kube-rbac-proxy/0.log" Nov 28 12:16:02 crc kubenswrapper[4772]: I1128 12:16:02.374609 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4ct44_d593cf3a-ced7-4f3a-a15a-10c3309a2ee3/machine-api-operator/0.log" Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.569559 4772 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-kd4gj"] Nov 28 12:16:06 crc kubenswrapper[4772]: E1128 12:16:06.570324 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25eb67f0-8a4d-4333-ab0f-e6461b6e98f1" containerName="collect-profiles" Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.570336 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="25eb67f0-8a4d-4333-ab0f-e6461b6e98f1" containerName="collect-profiles" Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.574137 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="25eb67f0-8a4d-4333-ab0f-e6461b6e98f1" containerName="collect-profiles" Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.575549 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.593677 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kd4gj"] Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.693227 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-utilities\") pod \"redhat-operators-kd4gj\" (UID: \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\") " pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.693298 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-catalog-content\") pod \"redhat-operators-kd4gj\" (UID: \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\") " pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.693414 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ns5w\" (UniqueName: \"kubernetes.io/projected/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-kube-api-access-5ns5w\") pod \"redhat-operators-kd4gj\" (UID: \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\") " pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.794731 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ns5w\" (UniqueName: \"kubernetes.io/projected/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-kube-api-access-5ns5w\") pod \"redhat-operators-kd4gj\" (UID: \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\") " pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.794840 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-utilities\") pod \"redhat-operators-kd4gj\" (UID: \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\") " pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.794886 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-catalog-content\") pod \"redhat-operators-kd4gj\" (UID: \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\") " pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.795377 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-utilities\") pod \"redhat-operators-kd4gj\" (UID: \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\") " pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.795429 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-catalog-content\") pod \"redhat-operators-kd4gj\" (UID: \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\") " pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.815339 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ns5w\" (UniqueName: \"kubernetes.io/projected/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-kube-api-access-5ns5w\") pod \"redhat-operators-kd4gj\" (UID: \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\") " pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:06 crc kubenswrapper[4772]: I1128 12:16:06.892238 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:07 crc kubenswrapper[4772]: I1128 12:16:07.916451 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kd4gj"] Nov 28 12:16:08 crc kubenswrapper[4772]: I1128 12:16:08.494293 4772 generic.go:334] "Generic (PLEG): container finished" podID="102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" containerID="5f2fd71c486c2a3f4bce23ce62728095aef9d60c37693513e8c56342dcf188be" exitCode=0 Nov 28 12:16:08 crc kubenswrapper[4772]: I1128 12:16:08.494516 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kd4gj" event={"ID":"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b","Type":"ContainerDied","Data":"5f2fd71c486c2a3f4bce23ce62728095aef9d60c37693513e8c56342dcf188be"} Nov 28 12:16:08 crc kubenswrapper[4772]: I1128 12:16:08.494625 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kd4gj" event={"ID":"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b","Type":"ContainerStarted","Data":"7c93e69364390a0c54511c391dc82827a702b647e579568cc688d76688715d36"} Nov 28 12:16:10 crc kubenswrapper[4772]: I1128 12:16:10.514979 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kd4gj" event={"ID":"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b","Type":"ContainerStarted","Data":"cbeb3fc7f0674bd7433b42bb5f62df0d25fe01a07db68f32f3082963f4fc31f6"} Nov 28 12:16:11 crc kubenswrapper[4772]: I1128 12:16:11.528085 4772 generic.go:334] "Generic (PLEG): container finished" podID="102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" containerID="cbeb3fc7f0674bd7433b42bb5f62df0d25fe01a07db68f32f3082963f4fc31f6" exitCode=0 Nov 28 12:16:11 crc kubenswrapper[4772]: I1128 12:16:11.528147 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kd4gj" event={"ID":"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b","Type":"ContainerDied","Data":"cbeb3fc7f0674bd7433b42bb5f62df0d25fe01a07db68f32f3082963f4fc31f6"} Nov 28 12:16:12 crc kubenswrapper[4772]: I1128 12:16:12.552418 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kd4gj" event={"ID":"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b","Type":"ContainerStarted","Data":"4e9b3fa2f5777e421866fac08c75acf96218699db520097ad2bb3fb2d7437ed2"} Nov 28 12:16:16 crc kubenswrapper[4772]: I1128 12:16:16.893314 
4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:16 crc kubenswrapper[4772]: I1128 12:16:16.893783 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:17 crc kubenswrapper[4772]: I1128 12:16:17.002962 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-hxhpx_9aab53d9-b682-4d53-8d5e-8fd0498411e6/cert-manager-controller/0.log" Nov 28 12:16:17 crc kubenswrapper[4772]: I1128 12:16:17.082232 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-tmmls_ee885bfb-fd81-472f-8de7-2a64130e0141/cert-manager-cainjector/0.log" Nov 28 12:16:17 crc kubenswrapper[4772]: I1128 12:16:17.198835 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-cq45f_e3fe3585-4df1-4fdc-ab0f-7c9c4ed0e6de/cert-manager-webhook/0.log" Nov 28 12:16:17 crc kubenswrapper[4772]: I1128 12:16:17.939464 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kd4gj" podUID="102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" containerName="registry-server" probeResult="failure" output=< Nov 28 12:16:17 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 28 12:16:17 crc kubenswrapper[4772]: > Nov 28 12:16:26 crc kubenswrapper[4772]: I1128 12:16:26.949287 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:26 crc kubenswrapper[4772]: I1128 12:16:26.979044 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kd4gj" podStartSLOduration=17.539021778 podStartE2EDuration="20.97902533s" podCreationTimestamp="2025-11-28 12:16:06 +0000 UTC" firstStartedPulling="2025-11-28 12:16:08.496950825 +0000 UTC m=+4166.820194052" lastFinishedPulling="2025-11-28 12:16:11.936954377 +0000 UTC m=+4170.260197604" observedRunningTime="2025-11-28 12:16:12.573811868 +0000 UTC m=+4170.897055105" watchObservedRunningTime="2025-11-28 12:16:26.97902533 +0000 UTC m=+4185.302268557" Nov 28 12:16:27 crc kubenswrapper[4772]: I1128 12:16:27.000227 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:28 crc kubenswrapper[4772]: I1128 12:16:28.084053 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kd4gj"] Nov 28 12:16:28 crc kubenswrapper[4772]: I1128 12:16:28.693260 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kd4gj" podUID="102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" containerName="registry-server" containerID="cri-o://4e9b3fa2f5777e421866fac08c75acf96218699db520097ad2bb3fb2d7437ed2" gracePeriod=2 Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.206163 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.238699 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-catalog-content\") pod \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\" (UID: \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\") " Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.238907 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-utilities\") pod \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\" (UID: \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\") " Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.238959 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ns5w\" (UniqueName: \"kubernetes.io/projected/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-kube-api-access-5ns5w\") pod \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\" (UID: \"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b\") " Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.239706 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-utilities" (OuterVolumeSpecName: "utilities") pod "102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" (UID: "102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.251294 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-kube-api-access-5ns5w" (OuterVolumeSpecName: "kube-api-access-5ns5w") pod "102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" (UID: "102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b"). InnerVolumeSpecName "kube-api-access-5ns5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.334858 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" (UID: "102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.340815 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.340843 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ns5w\" (UniqueName: \"kubernetes.io/projected/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-kube-api-access-5ns5w\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.340855 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.730927 4772 generic.go:334] "Generic (PLEG): container finished" podID="102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" containerID="4e9b3fa2f5777e421866fac08c75acf96218699db520097ad2bb3fb2d7437ed2" exitCode=0 Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.730988 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kd4gj" event={"ID":"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b","Type":"ContainerDied","Data":"4e9b3fa2f5777e421866fac08c75acf96218699db520097ad2bb3fb2d7437ed2"} Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.731026 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kd4gj" event={"ID":"102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b","Type":"ContainerDied","Data":"7c93e69364390a0c54511c391dc82827a702b647e579568cc688d76688715d36"} Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.731057 4772 scope.go:117] "RemoveContainer" containerID="4e9b3fa2f5777e421866fac08c75acf96218699db520097ad2bb3fb2d7437ed2" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.731122 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kd4gj" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.756805 4772 scope.go:117] "RemoveContainer" containerID="cbeb3fc7f0674bd7433b42bb5f62df0d25fe01a07db68f32f3082963f4fc31f6" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.779487 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kd4gj"] Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.787792 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kd4gj"] Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.805344 4772 scope.go:117] "RemoveContainer" containerID="5f2fd71c486c2a3f4bce23ce62728095aef9d60c37693513e8c56342dcf188be" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.832522 4772 scope.go:117] "RemoveContainer" containerID="4e9b3fa2f5777e421866fac08c75acf96218699db520097ad2bb3fb2d7437ed2" Nov 28 12:16:29 crc kubenswrapper[4772]: E1128 12:16:29.834692 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e9b3fa2f5777e421866fac08c75acf96218699db520097ad2bb3fb2d7437ed2\": container with ID starting with 4e9b3fa2f5777e421866fac08c75acf96218699db520097ad2bb3fb2d7437ed2 not found: ID does not exist" containerID="4e9b3fa2f5777e421866fac08c75acf96218699db520097ad2bb3fb2d7437ed2" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.834728 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e9b3fa2f5777e421866fac08c75acf96218699db520097ad2bb3fb2d7437ed2"} err="failed to get container status \"4e9b3fa2f5777e421866fac08c75acf96218699db520097ad2bb3fb2d7437ed2\": rpc error: code = NotFound desc = could not find container \"4e9b3fa2f5777e421866fac08c75acf96218699db520097ad2bb3fb2d7437ed2\": container with ID starting with 4e9b3fa2f5777e421866fac08c75acf96218699db520097ad2bb3fb2d7437ed2 not found: ID does not exist" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.834751 4772 scope.go:117] "RemoveContainer" containerID="cbeb3fc7f0674bd7433b42bb5f62df0d25fe01a07db68f32f3082963f4fc31f6" Nov 28 12:16:29 crc kubenswrapper[4772]: E1128 12:16:29.835250 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbeb3fc7f0674bd7433b42bb5f62df0d25fe01a07db68f32f3082963f4fc31f6\": container with ID starting with cbeb3fc7f0674bd7433b42bb5f62df0d25fe01a07db68f32f3082963f4fc31f6 not found: ID does not exist" containerID="cbeb3fc7f0674bd7433b42bb5f62df0d25fe01a07db68f32f3082963f4fc31f6" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.835273 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbeb3fc7f0674bd7433b42bb5f62df0d25fe01a07db68f32f3082963f4fc31f6"} err="failed to get container status \"cbeb3fc7f0674bd7433b42bb5f62df0d25fe01a07db68f32f3082963f4fc31f6\": rpc error: code = NotFound desc = could not find container \"cbeb3fc7f0674bd7433b42bb5f62df0d25fe01a07db68f32f3082963f4fc31f6\": container with ID starting with cbeb3fc7f0674bd7433b42bb5f62df0d25fe01a07db68f32f3082963f4fc31f6 not found: ID does not exist" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.835287 4772 scope.go:117] "RemoveContainer" containerID="5f2fd71c486c2a3f4bce23ce62728095aef9d60c37693513e8c56342dcf188be" Nov 28 12:16:29 crc kubenswrapper[4772]: E1128 12:16:29.835506 4772 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"5f2fd71c486c2a3f4bce23ce62728095aef9d60c37693513e8c56342dcf188be\": container with ID starting with 5f2fd71c486c2a3f4bce23ce62728095aef9d60c37693513e8c56342dcf188be not found: ID does not exist" containerID="5f2fd71c486c2a3f4bce23ce62728095aef9d60c37693513e8c56342dcf188be" Nov 28 12:16:29 crc kubenswrapper[4772]: I1128 12:16:29.835538 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2fd71c486c2a3f4bce23ce62728095aef9d60c37693513e8c56342dcf188be"} err="failed to get container status \"5f2fd71c486c2a3f4bce23ce62728095aef9d60c37693513e8c56342dcf188be\": rpc error: code = NotFound desc = could not find container \"5f2fd71c486c2a3f4bce23ce62728095aef9d60c37693513e8c56342dcf188be\": container with ID starting with 5f2fd71c486c2a3f4bce23ce62728095aef9d60c37693513e8c56342dcf188be not found: ID does not exist" Nov 28 12:16:30 crc kubenswrapper[4772]: I1128 12:16:30.005249 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" path="/var/lib/kubelet/pods/102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b/volumes" Nov 28 12:16:31 crc kubenswrapper[4772]: I1128 12:16:31.855837 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-7glrw_dca0040a-69ac-4ff1-aefa-64b17329697f/nmstate-console-plugin/0.log" Nov 28 12:16:32 crc kubenswrapper[4772]: I1128 12:16:32.026676 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qqs48_1bec0d53-d3cb-497c-8db1-646796c7194c/nmstate-handler/0.log" Nov 28 12:16:32 crc kubenswrapper[4772]: I1128 12:16:32.067564 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-zj2qc_ee79d6cb-a211-42c9-b669-9e202376834a/kube-rbac-proxy/0.log" Nov 28 12:16:32 crc kubenswrapper[4772]: I1128 12:16:32.151248 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-zj2qc_ee79d6cb-a211-42c9-b669-9e202376834a/nmstate-metrics/0.log" Nov 28 12:16:32 crc kubenswrapper[4772]: I1128 12:16:32.200346 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-2mwrb_e1c70388-5a93-4972-ab1f-24e87ab8498e/nmstate-operator/0.log" Nov 28 12:16:32 crc kubenswrapper[4772]: I1128 12:16:32.335247 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-h6rxp_c53fcd9d-4c6c-4829-8caa-3cddd7c60442/nmstate-webhook/0.log" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.102195 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pgzmk"] Nov 28 12:16:46 crc kubenswrapper[4772]: E1128 12:16:46.103148 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" containerName="extract-content" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.103162 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" containerName="extract-content" Nov 28 12:16:46 crc kubenswrapper[4772]: E1128 12:16:46.103175 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" containerName="extract-utilities" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.103183 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" 
containerName="extract-utilities" Nov 28 12:16:46 crc kubenswrapper[4772]: E1128 12:16:46.103211 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" containerName="registry-server" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.103217 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" containerName="registry-server" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.103413 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="102d7ffa-7f1a-4448-bb2b-c9a74ca4e65b" containerName="registry-server" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.104910 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.116140 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9272db-56cd-4c4d-a71a-3c689477cc32-catalog-content\") pod \"community-operators-pgzmk\" (UID: \"7e9272db-56cd-4c4d-a71a-3c689477cc32\") " pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.116271 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9272db-56cd-4c4d-a71a-3c689477cc32-utilities\") pod \"community-operators-pgzmk\" (UID: \"7e9272db-56cd-4c4d-a71a-3c689477cc32\") " pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.116303 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktqqd\" (UniqueName: \"kubernetes.io/projected/7e9272db-56cd-4c4d-a71a-3c689477cc32-kube-api-access-ktqqd\") pod \"community-operators-pgzmk\" (UID: \"7e9272db-56cd-4c4d-a71a-3c689477cc32\") " pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.121794 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pgzmk"] Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.218243 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9272db-56cd-4c4d-a71a-3c689477cc32-utilities\") pod \"community-operators-pgzmk\" (UID: \"7e9272db-56cd-4c4d-a71a-3c689477cc32\") " pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.218333 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktqqd\" (UniqueName: \"kubernetes.io/projected/7e9272db-56cd-4c4d-a71a-3c689477cc32-kube-api-access-ktqqd\") pod \"community-operators-pgzmk\" (UID: \"7e9272db-56cd-4c4d-a71a-3c689477cc32\") " pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.218403 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9272db-56cd-4c4d-a71a-3c689477cc32-catalog-content\") pod \"community-operators-pgzmk\" (UID: \"7e9272db-56cd-4c4d-a71a-3c689477cc32\") " pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.219194 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9272db-56cd-4c4d-a71a-3c689477cc32-catalog-content\") pod \"community-operators-pgzmk\" (UID: \"7e9272db-56cd-4c4d-a71a-3c689477cc32\") " pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.219595 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9272db-56cd-4c4d-a71a-3c689477cc32-utilities\") pod \"community-operators-pgzmk\" (UID: \"7e9272db-56cd-4c4d-a71a-3c689477cc32\") " pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.247190 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktqqd\" (UniqueName: \"kubernetes.io/projected/7e9272db-56cd-4c4d-a71a-3c689477cc32-kube-api-access-ktqqd\") pod \"community-operators-pgzmk\" (UID: \"7e9272db-56cd-4c4d-a71a-3c689477cc32\") " pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.443876 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:46 crc kubenswrapper[4772]: I1128 12:16:46.980051 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pgzmk"] Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.502375 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lfn48"] Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.505056 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.517298 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfn48"] Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.551694 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29c43aff-8fd8-466f-a957-80f7956fcecb-utilities\") pod \"redhat-marketplace-lfn48\" (UID: \"29c43aff-8fd8-466f-a957-80f7956fcecb\") " pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.551791 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcwwm\" (UniqueName: \"kubernetes.io/projected/29c43aff-8fd8-466f-a957-80f7956fcecb-kube-api-access-pcwwm\") pod \"redhat-marketplace-lfn48\" (UID: \"29c43aff-8fd8-466f-a957-80f7956fcecb\") " pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.552036 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29c43aff-8fd8-466f-a957-80f7956fcecb-catalog-content\") pod \"redhat-marketplace-lfn48\" (UID: \"29c43aff-8fd8-466f-a957-80f7956fcecb\") " pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.653058 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29c43aff-8fd8-466f-a957-80f7956fcecb-catalog-content\") pod \"redhat-marketplace-lfn48\" (UID: \"29c43aff-8fd8-466f-a957-80f7956fcecb\") " 
pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.653104 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29c43aff-8fd8-466f-a957-80f7956fcecb-utilities\") pod \"redhat-marketplace-lfn48\" (UID: \"29c43aff-8fd8-466f-a957-80f7956fcecb\") " pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.653181 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcwwm\" (UniqueName: \"kubernetes.io/projected/29c43aff-8fd8-466f-a957-80f7956fcecb-kube-api-access-pcwwm\") pod \"redhat-marketplace-lfn48\" (UID: \"29c43aff-8fd8-466f-a957-80f7956fcecb\") " pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.653965 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29c43aff-8fd8-466f-a957-80f7956fcecb-utilities\") pod \"redhat-marketplace-lfn48\" (UID: \"29c43aff-8fd8-466f-a957-80f7956fcecb\") " pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.653968 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29c43aff-8fd8-466f-a957-80f7956fcecb-catalog-content\") pod \"redhat-marketplace-lfn48\" (UID: \"29c43aff-8fd8-466f-a957-80f7956fcecb\") " pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.688010 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcwwm\" (UniqueName: \"kubernetes.io/projected/29c43aff-8fd8-466f-a957-80f7956fcecb-kube-api-access-pcwwm\") pod \"redhat-marketplace-lfn48\" (UID: \"29c43aff-8fd8-466f-a957-80f7956fcecb\") " pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.833306 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.916446 4772 generic.go:334] "Generic (PLEG): container finished" podID="7e9272db-56cd-4c4d-a71a-3c689477cc32" containerID="1fdc476016f25445df10471df2dd3aa65b370ba73c73373ba2ceece82611d1f9" exitCode=0 Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.916490 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgzmk" event={"ID":"7e9272db-56cd-4c4d-a71a-3c689477cc32","Type":"ContainerDied","Data":"1fdc476016f25445df10471df2dd3aa65b370ba73c73373ba2ceece82611d1f9"} Nov 28 12:16:47 crc kubenswrapper[4772]: I1128 12:16:47.916514 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgzmk" event={"ID":"7e9272db-56cd-4c4d-a71a-3c689477cc32","Type":"ContainerStarted","Data":"f7d02d5df757d0e80c10d85e91209b9088cb971e7e1e658dee3a287e3c36a991"} Nov 28 12:16:48 crc kubenswrapper[4772]: I1128 12:16:48.330430 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfn48"] Nov 28 12:16:48 crc kubenswrapper[4772]: I1128 12:16:48.925278 4772 generic.go:334] "Generic (PLEG): container finished" podID="29c43aff-8fd8-466f-a957-80f7956fcecb" containerID="baef195a8947fdfbea3754d76206fc145cc23ba650c9b7efbd2c326c545746a5" exitCode=0 Nov 28 12:16:48 crc kubenswrapper[4772]: I1128 12:16:48.925401 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfn48" event={"ID":"29c43aff-8fd8-466f-a957-80f7956fcecb","Type":"ContainerDied","Data":"baef195a8947fdfbea3754d76206fc145cc23ba650c9b7efbd2c326c545746a5"} Nov 28 12:16:48 crc kubenswrapper[4772]: I1128 12:16:48.925651 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfn48" event={"ID":"29c43aff-8fd8-466f-a957-80f7956fcecb","Type":"ContainerStarted","Data":"5adebb5df027451e9600b2e397851337f135e4cc1918eb05d6b5420453bdc936"} Nov 28 12:16:48 crc kubenswrapper[4772]: I1128 12:16:48.928066 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgzmk" event={"ID":"7e9272db-56cd-4c4d-a71a-3c689477cc32","Type":"ContainerStarted","Data":"f1f6c36bd00a72286987ae30beb40a170e54db3e5a4c4f8b8028abcc12ee730b"} Nov 28 12:16:49 crc kubenswrapper[4772]: I1128 12:16:49.035153 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-pndk9_e2ea9b50-b6aa-4700-b238-a66b47d5d070/kube-rbac-proxy/0.log" Nov 28 12:16:49 crc kubenswrapper[4772]: I1128 12:16:49.197447 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-pndk9_e2ea9b50-b6aa-4700-b238-a66b47d5d070/controller/0.log" Nov 28 12:16:49 crc kubenswrapper[4772]: I1128 12:16:49.318742 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-frr-files/0.log" Nov 28 12:16:49 crc kubenswrapper[4772]: I1128 12:16:49.940293 4772 generic.go:334] "Generic (PLEG): container finished" podID="7e9272db-56cd-4c4d-a71a-3c689477cc32" containerID="f1f6c36bd00a72286987ae30beb40a170e54db3e5a4c4f8b8028abcc12ee730b" exitCode=0 Nov 28 12:16:49 crc kubenswrapper[4772]: I1128 12:16:49.940360 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgzmk" 
event={"ID":"7e9272db-56cd-4c4d-a71a-3c689477cc32","Type":"ContainerDied","Data":"f1f6c36bd00a72286987ae30beb40a170e54db3e5a4c4f8b8028abcc12ee730b"} Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.035072 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-reloader/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.040313 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-frr-files/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.096001 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-reloader/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.111720 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-metrics/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.288361 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-frr-files/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.324934 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-metrics/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.343448 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-reloader/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.350621 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-metrics/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.521035 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-reloader/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.533024 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-frr-files/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.535797 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/cp-metrics/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.566206 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/controller/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.744883 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/kube-rbac-proxy/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.768478 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/frr-metrics/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.865315 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/kube-rbac-proxy-frr/0.log" Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.950353 4772 generic.go:334] "Generic (PLEG): container finished" podID="29c43aff-8fd8-466f-a957-80f7956fcecb" 
containerID="6936cad939c469e5e88dcd3aa75dda577d93ad34d1862f17dcd1bdc9135da130" exitCode=0 Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.952310 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfn48" event={"ID":"29c43aff-8fd8-466f-a957-80f7956fcecb","Type":"ContainerDied","Data":"6936cad939c469e5e88dcd3aa75dda577d93ad34d1862f17dcd1bdc9135da130"} Nov 28 12:16:50 crc kubenswrapper[4772]: I1128 12:16:50.957546 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgzmk" event={"ID":"7e9272db-56cd-4c4d-a71a-3c689477cc32","Type":"ContainerStarted","Data":"17f1417b59bf537268105c5523c6a87d4e8473faaa66db42515e028b431fa339"} Nov 28 12:16:51 crc kubenswrapper[4772]: I1128 12:16:51.001096 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pgzmk" podStartSLOduration=2.330292318 podStartE2EDuration="5.001073254s" podCreationTimestamp="2025-11-28 12:16:46 +0000 UTC" firstStartedPulling="2025-11-28 12:16:47.917859813 +0000 UTC m=+4206.241103040" lastFinishedPulling="2025-11-28 12:16:50.588640749 +0000 UTC m=+4208.911883976" observedRunningTime="2025-11-28 12:16:50.999831972 +0000 UTC m=+4209.323075199" watchObservedRunningTime="2025-11-28 12:16:51.001073254 +0000 UTC m=+4209.324316491" Nov 28 12:16:51 crc kubenswrapper[4772]: I1128 12:16:51.004199 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/reloader/0.log" Nov 28 12:16:51 crc kubenswrapper[4772]: I1128 12:16:51.085957 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-h5pwr_755f7720-2965-444a-887c-b4ab39b4160f/frr-k8s-webhook-server/0.log" Nov 28 12:16:51 crc kubenswrapper[4772]: I1128 12:16:51.682073 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-656496c9ff-4ql2l_b9e32536-fc25-4e7d-8361-41e61fd188f4/manager/0.log" Nov 28 12:16:51 crc kubenswrapper[4772]: I1128 12:16:51.971071 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfn48" event={"ID":"29c43aff-8fd8-466f-a957-80f7956fcecb","Type":"ContainerStarted","Data":"7c8e2e119a9ee7e1e20b497d9e8b834c356884bbf6777fe685f8798b5a3721da"} Nov 28 12:16:52 crc kubenswrapper[4772]: I1128 12:16:52.013389 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lfn48" podStartSLOduration=2.503371366 podStartE2EDuration="5.013348296s" podCreationTimestamp="2025-11-28 12:16:47 +0000 UTC" firstStartedPulling="2025-11-28 12:16:48.926958623 +0000 UTC m=+4207.250201850" lastFinishedPulling="2025-11-28 12:16:51.436935533 +0000 UTC m=+4209.760178780" observedRunningTime="2025-11-28 12:16:51.987786537 +0000 UTC m=+4210.311029754" watchObservedRunningTime="2025-11-28 12:16:52.013348296 +0000 UTC m=+4210.336591523" Nov 28 12:16:52 crc kubenswrapper[4772]: I1128 12:16:52.053951 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b6fc5bcd6-krdrb_2c08f594-245d-4b59-9890-c8277ce4229f/webhook-server/0.log" Nov 28 12:16:52 crc kubenswrapper[4772]: I1128 12:16:52.133517 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c5wb9_8566dd6d-6382-4a11-b2f6-cbd8aa2d75ba/frr/0.log" Nov 28 12:16:52 crc kubenswrapper[4772]: I1128 12:16:52.246864 4772 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-c9tgf_175603eb-4244-46dc-98a4-2f8426488c48/kube-rbac-proxy/0.log" Nov 28 12:16:52 crc kubenswrapper[4772]: I1128 12:16:52.482583 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-c9tgf_175603eb-4244-46dc-98a4-2f8426488c48/speaker/0.log" Nov 28 12:16:56 crc kubenswrapper[4772]: I1128 12:16:56.446895 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:56 crc kubenswrapper[4772]: I1128 12:16:56.448556 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:56 crc kubenswrapper[4772]: I1128 12:16:56.523813 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:57 crc kubenswrapper[4772]: I1128 12:16:57.077338 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:57 crc kubenswrapper[4772]: I1128 12:16:57.125541 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pgzmk"] Nov 28 12:16:57 crc kubenswrapper[4772]: I1128 12:16:57.834310 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:57 crc kubenswrapper[4772]: I1128 12:16:57.834431 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:57 crc kubenswrapper[4772]: I1128 12:16:57.923390 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:58 crc kubenswrapper[4772]: I1128 12:16:58.096297 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:16:59 crc kubenswrapper[4772]: I1128 12:16:59.046463 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pgzmk" podUID="7e9272db-56cd-4c4d-a71a-3c689477cc32" containerName="registry-server" containerID="cri-o://17f1417b59bf537268105c5523c6a87d4e8473faaa66db42515e028b431fa339" gracePeriod=2 Nov 28 12:16:59 crc kubenswrapper[4772]: I1128 12:16:59.644786 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:16:59 crc kubenswrapper[4772]: I1128 12:16:59.784127 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktqqd\" (UniqueName: \"kubernetes.io/projected/7e9272db-56cd-4c4d-a71a-3c689477cc32-kube-api-access-ktqqd\") pod \"7e9272db-56cd-4c4d-a71a-3c689477cc32\" (UID: \"7e9272db-56cd-4c4d-a71a-3c689477cc32\") " Nov 28 12:16:59 crc kubenswrapper[4772]: I1128 12:16:59.784181 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9272db-56cd-4c4d-a71a-3c689477cc32-catalog-content\") pod \"7e9272db-56cd-4c4d-a71a-3c689477cc32\" (UID: \"7e9272db-56cd-4c4d-a71a-3c689477cc32\") " Nov 28 12:16:59 crc kubenswrapper[4772]: I1128 12:16:59.784234 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9272db-56cd-4c4d-a71a-3c689477cc32-utilities\") pod \"7e9272db-56cd-4c4d-a71a-3c689477cc32\" (UID: \"7e9272db-56cd-4c4d-a71a-3c689477cc32\") " Nov 28 12:16:59 crc kubenswrapper[4772]: I1128 12:16:59.785553 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e9272db-56cd-4c4d-a71a-3c689477cc32-utilities" (OuterVolumeSpecName: "utilities") pod "7e9272db-56cd-4c4d-a71a-3c689477cc32" (UID: "7e9272db-56cd-4c4d-a71a-3c689477cc32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:59 crc kubenswrapper[4772]: I1128 12:16:59.810546 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e9272db-56cd-4c4d-a71a-3c689477cc32-kube-api-access-ktqqd" (OuterVolumeSpecName: "kube-api-access-ktqqd") pod "7e9272db-56cd-4c4d-a71a-3c689477cc32" (UID: "7e9272db-56cd-4c4d-a71a-3c689477cc32"). InnerVolumeSpecName "kube-api-access-ktqqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:16:59 crc kubenswrapper[4772]: I1128 12:16:59.846453 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e9272db-56cd-4c4d-a71a-3c689477cc32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e9272db-56cd-4c4d-a71a-3c689477cc32" (UID: "7e9272db-56cd-4c4d-a71a-3c689477cc32"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:59 crc kubenswrapper[4772]: I1128 12:16:59.886496 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktqqd\" (UniqueName: \"kubernetes.io/projected/7e9272db-56cd-4c4d-a71a-3c689477cc32-kube-api-access-ktqqd\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:59 crc kubenswrapper[4772]: I1128 12:16:59.886531 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9272db-56cd-4c4d-a71a-3c689477cc32-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:59 crc kubenswrapper[4772]: I1128 12:16:59.886544 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9272db-56cd-4c4d-a71a-3c689477cc32-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.059539 4772 generic.go:334] "Generic (PLEG): container finished" podID="7e9272db-56cd-4c4d-a71a-3c689477cc32" containerID="17f1417b59bf537268105c5523c6a87d4e8473faaa66db42515e028b431fa339" exitCode=0 Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.059585 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgzmk" event={"ID":"7e9272db-56cd-4c4d-a71a-3c689477cc32","Type":"ContainerDied","Data":"17f1417b59bf537268105c5523c6a87d4e8473faaa66db42515e028b431fa339"} Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.059639 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgzmk" event={"ID":"7e9272db-56cd-4c4d-a71a-3c689477cc32","Type":"ContainerDied","Data":"f7d02d5df757d0e80c10d85e91209b9088cb971e7e1e658dee3a287e3c36a991"} Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.059630 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pgzmk" Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.059716 4772 scope.go:117] "RemoveContainer" containerID="17f1417b59bf537268105c5523c6a87d4e8473faaa66db42515e028b431fa339" Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.090824 4772 scope.go:117] "RemoveContainer" containerID="f1f6c36bd00a72286987ae30beb40a170e54db3e5a4c4f8b8028abcc12ee730b" Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.093711 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pgzmk"] Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.113325 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pgzmk"] Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.123577 4772 scope.go:117] "RemoveContainer" containerID="1fdc476016f25445df10471df2dd3aa65b370ba73c73373ba2ceece82611d1f9" Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.181118 4772 scope.go:117] "RemoveContainer" containerID="17f1417b59bf537268105c5523c6a87d4e8473faaa66db42515e028b431fa339" Nov 28 12:17:00 crc kubenswrapper[4772]: E1128 12:17:00.181744 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17f1417b59bf537268105c5523c6a87d4e8473faaa66db42515e028b431fa339\": container with ID starting with 17f1417b59bf537268105c5523c6a87d4e8473faaa66db42515e028b431fa339 not found: ID does not exist" containerID="17f1417b59bf537268105c5523c6a87d4e8473faaa66db42515e028b431fa339" Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.181793 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f1417b59bf537268105c5523c6a87d4e8473faaa66db42515e028b431fa339"} err="failed to get container status \"17f1417b59bf537268105c5523c6a87d4e8473faaa66db42515e028b431fa339\": rpc error: code = NotFound desc = could not find container \"17f1417b59bf537268105c5523c6a87d4e8473faaa66db42515e028b431fa339\": container with ID starting with 17f1417b59bf537268105c5523c6a87d4e8473faaa66db42515e028b431fa339 not found: ID does not exist" Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.181821 4772 scope.go:117] "RemoveContainer" containerID="f1f6c36bd00a72286987ae30beb40a170e54db3e5a4c4f8b8028abcc12ee730b" Nov 28 12:17:00 crc kubenswrapper[4772]: E1128 12:17:00.182230 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1f6c36bd00a72286987ae30beb40a170e54db3e5a4c4f8b8028abcc12ee730b\": container with ID starting with f1f6c36bd00a72286987ae30beb40a170e54db3e5a4c4f8b8028abcc12ee730b not found: ID does not exist" containerID="f1f6c36bd00a72286987ae30beb40a170e54db3e5a4c4f8b8028abcc12ee730b" Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.182264 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1f6c36bd00a72286987ae30beb40a170e54db3e5a4c4f8b8028abcc12ee730b"} err="failed to get container status \"f1f6c36bd00a72286987ae30beb40a170e54db3e5a4c4f8b8028abcc12ee730b\": rpc error: code = NotFound desc = could not find container \"f1f6c36bd00a72286987ae30beb40a170e54db3e5a4c4f8b8028abcc12ee730b\": container with ID starting with f1f6c36bd00a72286987ae30beb40a170e54db3e5a4c4f8b8028abcc12ee730b not found: ID does not exist" Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.182284 4772 scope.go:117] "RemoveContainer" 
containerID="1fdc476016f25445df10471df2dd3aa65b370ba73c73373ba2ceece82611d1f9" Nov 28 12:17:00 crc kubenswrapper[4772]: E1128 12:17:00.182915 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fdc476016f25445df10471df2dd3aa65b370ba73c73373ba2ceece82611d1f9\": container with ID starting with 1fdc476016f25445df10471df2dd3aa65b370ba73c73373ba2ceece82611d1f9 not found: ID does not exist" containerID="1fdc476016f25445df10471df2dd3aa65b370ba73c73373ba2ceece82611d1f9" Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.182941 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fdc476016f25445df10471df2dd3aa65b370ba73c73373ba2ceece82611d1f9"} err="failed to get container status \"1fdc476016f25445df10471df2dd3aa65b370ba73c73373ba2ceece82611d1f9\": rpc error: code = NotFound desc = could not find container \"1fdc476016f25445df10471df2dd3aa65b370ba73c73373ba2ceece82611d1f9\": container with ID starting with 1fdc476016f25445df10471df2dd3aa65b370ba73c73373ba2ceece82611d1f9 not found: ID does not exist" Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.965445 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfn48"] Nov 28 12:17:00 crc kubenswrapper[4772]: I1128 12:17:00.966034 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lfn48" podUID="29c43aff-8fd8-466f-a957-80f7956fcecb" containerName="registry-server" containerID="cri-o://7c8e2e119a9ee7e1e20b497d9e8b834c356884bbf6777fe685f8798b5a3721da" gracePeriod=2 Nov 28 12:17:01 crc kubenswrapper[4772]: I1128 12:17:01.447809 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:17:01 crc kubenswrapper[4772]: I1128 12:17:01.518593 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29c43aff-8fd8-466f-a957-80f7956fcecb-utilities\") pod \"29c43aff-8fd8-466f-a957-80f7956fcecb\" (UID: \"29c43aff-8fd8-466f-a957-80f7956fcecb\") " Nov 28 12:17:01 crc kubenswrapper[4772]: I1128 12:17:01.518696 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcwwm\" (UniqueName: \"kubernetes.io/projected/29c43aff-8fd8-466f-a957-80f7956fcecb-kube-api-access-pcwwm\") pod \"29c43aff-8fd8-466f-a957-80f7956fcecb\" (UID: \"29c43aff-8fd8-466f-a957-80f7956fcecb\") " Nov 28 12:17:01 crc kubenswrapper[4772]: I1128 12:17:01.518825 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29c43aff-8fd8-466f-a957-80f7956fcecb-catalog-content\") pod \"29c43aff-8fd8-466f-a957-80f7956fcecb\" (UID: \"29c43aff-8fd8-466f-a957-80f7956fcecb\") " Nov 28 12:17:01 crc kubenswrapper[4772]: I1128 12:17:01.519511 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29c43aff-8fd8-466f-a957-80f7956fcecb-utilities" (OuterVolumeSpecName: "utilities") pod "29c43aff-8fd8-466f-a957-80f7956fcecb" (UID: "29c43aff-8fd8-466f-a957-80f7956fcecb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:17:01 crc kubenswrapper[4772]: I1128 12:17:01.519852 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29c43aff-8fd8-466f-a957-80f7956fcecb-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:01 crc kubenswrapper[4772]: I1128 12:17:01.526559 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c43aff-8fd8-466f-a957-80f7956fcecb-kube-api-access-pcwwm" (OuterVolumeSpecName: "kube-api-access-pcwwm") pod "29c43aff-8fd8-466f-a957-80f7956fcecb" (UID: "29c43aff-8fd8-466f-a957-80f7956fcecb"). InnerVolumeSpecName "kube-api-access-pcwwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:17:01 crc kubenswrapper[4772]: I1128 12:17:01.536336 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29c43aff-8fd8-466f-a957-80f7956fcecb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29c43aff-8fd8-466f-a957-80f7956fcecb" (UID: "29c43aff-8fd8-466f-a957-80f7956fcecb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:17:01 crc kubenswrapper[4772]: I1128 12:17:01.621225 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcwwm\" (UniqueName: \"kubernetes.io/projected/29c43aff-8fd8-466f-a957-80f7956fcecb-kube-api-access-pcwwm\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:01 crc kubenswrapper[4772]: I1128 12:17:01.621276 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29c43aff-8fd8-466f-a957-80f7956fcecb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.006268 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e9272db-56cd-4c4d-a71a-3c689477cc32" path="/var/lib/kubelet/pods/7e9272db-56cd-4c4d-a71a-3c689477cc32/volumes" Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.085347 4772 generic.go:334] "Generic (PLEG): container finished" podID="29c43aff-8fd8-466f-a957-80f7956fcecb" containerID="7c8e2e119a9ee7e1e20b497d9e8b834c356884bbf6777fe685f8798b5a3721da" exitCode=0 Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.085401 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfn48" event={"ID":"29c43aff-8fd8-466f-a957-80f7956fcecb","Type":"ContainerDied","Data":"7c8e2e119a9ee7e1e20b497d9e8b834c356884bbf6777fe685f8798b5a3721da"} Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.085706 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfn48" event={"ID":"29c43aff-8fd8-466f-a957-80f7956fcecb","Type":"ContainerDied","Data":"5adebb5df027451e9600b2e397851337f135e4cc1918eb05d6b5420453bdc936"} Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.085758 4772 scope.go:117] "RemoveContainer" containerID="7c8e2e119a9ee7e1e20b497d9e8b834c356884bbf6777fe685f8798b5a3721da" Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.085441 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfn48" Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.111185 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfn48"] Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.119085 4772 scope.go:117] "RemoveContainer" containerID="6936cad939c469e5e88dcd3aa75dda577d93ad34d1862f17dcd1bdc9135da130" Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.125067 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfn48"] Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.139958 4772 scope.go:117] "RemoveContainer" containerID="baef195a8947fdfbea3754d76206fc145cc23ba650c9b7efbd2c326c545746a5" Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.192104 4772 scope.go:117] "RemoveContainer" containerID="7c8e2e119a9ee7e1e20b497d9e8b834c356884bbf6777fe685f8798b5a3721da" Nov 28 12:17:02 crc kubenswrapper[4772]: E1128 12:17:02.192657 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c8e2e119a9ee7e1e20b497d9e8b834c356884bbf6777fe685f8798b5a3721da\": container with ID starting with 7c8e2e119a9ee7e1e20b497d9e8b834c356884bbf6777fe685f8798b5a3721da not found: ID does not exist" containerID="7c8e2e119a9ee7e1e20b497d9e8b834c356884bbf6777fe685f8798b5a3721da" Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.192711 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c8e2e119a9ee7e1e20b497d9e8b834c356884bbf6777fe685f8798b5a3721da"} err="failed to get container status \"7c8e2e119a9ee7e1e20b497d9e8b834c356884bbf6777fe685f8798b5a3721da\": rpc error: code = NotFound desc = could not find container \"7c8e2e119a9ee7e1e20b497d9e8b834c356884bbf6777fe685f8798b5a3721da\": container with ID starting with 7c8e2e119a9ee7e1e20b497d9e8b834c356884bbf6777fe685f8798b5a3721da not found: ID does not exist" Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.192753 4772 scope.go:117] "RemoveContainer" containerID="6936cad939c469e5e88dcd3aa75dda577d93ad34d1862f17dcd1bdc9135da130" Nov 28 12:17:02 crc kubenswrapper[4772]: E1128 12:17:02.193309 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6936cad939c469e5e88dcd3aa75dda577d93ad34d1862f17dcd1bdc9135da130\": container with ID starting with 6936cad939c469e5e88dcd3aa75dda577d93ad34d1862f17dcd1bdc9135da130 not found: ID does not exist" containerID="6936cad939c469e5e88dcd3aa75dda577d93ad34d1862f17dcd1bdc9135da130" Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.193410 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6936cad939c469e5e88dcd3aa75dda577d93ad34d1862f17dcd1bdc9135da130"} err="failed to get container status \"6936cad939c469e5e88dcd3aa75dda577d93ad34d1862f17dcd1bdc9135da130\": rpc error: code = NotFound desc = could not find container \"6936cad939c469e5e88dcd3aa75dda577d93ad34d1862f17dcd1bdc9135da130\": container with ID starting with 6936cad939c469e5e88dcd3aa75dda577d93ad34d1862f17dcd1bdc9135da130 not found: ID does not exist" Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.193506 4772 scope.go:117] "RemoveContainer" containerID="baef195a8947fdfbea3754d76206fc145cc23ba650c9b7efbd2c326c545746a5" Nov 28 12:17:02 crc kubenswrapper[4772]: E1128 12:17:02.193856 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"baef195a8947fdfbea3754d76206fc145cc23ba650c9b7efbd2c326c545746a5\": container with ID starting with baef195a8947fdfbea3754d76206fc145cc23ba650c9b7efbd2c326c545746a5 not found: ID does not exist" containerID="baef195a8947fdfbea3754d76206fc145cc23ba650c9b7efbd2c326c545746a5" Nov 28 12:17:02 crc kubenswrapper[4772]: I1128 12:17:02.193905 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baef195a8947fdfbea3754d76206fc145cc23ba650c9b7efbd2c326c545746a5"} err="failed to get container status \"baef195a8947fdfbea3754d76206fc145cc23ba650c9b7efbd2c326c545746a5\": rpc error: code = NotFound desc = could not find container \"baef195a8947fdfbea3754d76206fc145cc23ba650c9b7efbd2c326c545746a5\": container with ID starting with baef195a8947fdfbea3754d76206fc145cc23ba650c9b7efbd2c326c545746a5 not found: ID does not exist" Nov 28 12:17:04 crc kubenswrapper[4772]: I1128 12:17:04.007676 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c43aff-8fd8-466f-a957-80f7956fcecb" path="/var/lib/kubelet/pods/29c43aff-8fd8-466f-a957-80f7956fcecb/volumes" Nov 28 12:17:05 crc kubenswrapper[4772]: I1128 12:17:05.862086 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn_069b2332-5974-4cab-b15b-dfa1985ebce1/util/0.log" Nov 28 12:17:06 crc kubenswrapper[4772]: I1128 12:17:06.085736 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn_069b2332-5974-4cab-b15b-dfa1985ebce1/util/0.log" Nov 28 12:17:06 crc kubenswrapper[4772]: I1128 12:17:06.091236 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn_069b2332-5974-4cab-b15b-dfa1985ebce1/pull/0.log" Nov 28 12:17:06 crc kubenswrapper[4772]: I1128 12:17:06.104147 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn_069b2332-5974-4cab-b15b-dfa1985ebce1/pull/0.log" Nov 28 12:17:06 crc kubenswrapper[4772]: I1128 12:17:06.285405 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn_069b2332-5974-4cab-b15b-dfa1985ebce1/extract/0.log" Nov 28 12:17:06 crc kubenswrapper[4772]: I1128 12:17:06.292143 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn_069b2332-5974-4cab-b15b-dfa1985ebce1/pull/0.log" Nov 28 12:17:06 crc kubenswrapper[4772]: I1128 12:17:06.300139 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fsqthn_069b2332-5974-4cab-b15b-dfa1985ebce1/util/0.log" Nov 28 12:17:06 crc kubenswrapper[4772]: I1128 12:17:06.442111 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd_53a7011a-aa17-41ce-9010-9cc9bb873b56/util/0.log" Nov 28 12:17:06 crc kubenswrapper[4772]: I1128 12:17:06.590485 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd_53a7011a-aa17-41ce-9010-9cc9bb873b56/util/0.log" 
Nov 28 12:17:06 crc kubenswrapper[4772]: I1128 12:17:06.643646 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd_53a7011a-aa17-41ce-9010-9cc9bb873b56/pull/0.log"
Nov 28 12:17:06 crc kubenswrapper[4772]: I1128 12:17:06.643814 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd_53a7011a-aa17-41ce-9010-9cc9bb873b56/pull/0.log"
Nov 28 12:17:06 crc kubenswrapper[4772]: I1128 12:17:06.820351 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd_53a7011a-aa17-41ce-9010-9cc9bb873b56/pull/0.log"
Nov 28 12:17:06 crc kubenswrapper[4772]: I1128 12:17:06.857971 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd_53a7011a-aa17-41ce-9010-9cc9bb873b56/extract/0.log"
Nov 28 12:17:06 crc kubenswrapper[4772]: I1128 12:17:06.900102 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c8htd_53a7011a-aa17-41ce-9010-9cc9bb873b56/util/0.log"
Nov 28 12:17:07 crc kubenswrapper[4772]: I1128 12:17:07.027192 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9h6kq_08428ad5-f854-4a72-a10a-bc72715b05a0/extract-utilities/0.log"
Nov 28 12:17:07 crc kubenswrapper[4772]: I1128 12:17:07.180071 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9h6kq_08428ad5-f854-4a72-a10a-bc72715b05a0/extract-content/0.log"
Nov 28 12:17:07 crc kubenswrapper[4772]: I1128 12:17:07.197192 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9h6kq_08428ad5-f854-4a72-a10a-bc72715b05a0/extract-content/0.log"
Nov 28 12:17:07 crc kubenswrapper[4772]: I1128 12:17:07.204207 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9h6kq_08428ad5-f854-4a72-a10a-bc72715b05a0/extract-utilities/0.log"
Nov 28 12:17:07 crc kubenswrapper[4772]: I1128 12:17:07.375786 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9h6kq_08428ad5-f854-4a72-a10a-bc72715b05a0/extract-utilities/0.log"
Nov 28 12:17:07 crc kubenswrapper[4772]: I1128 12:17:07.428466 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9h6kq_08428ad5-f854-4a72-a10a-bc72715b05a0/extract-content/0.log"
Nov 28 12:17:07 crc kubenswrapper[4772]: I1128 12:17:07.592930 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-42knr_42c5efb9-80ec-409e-9e49-326461bfa739/extract-utilities/0.log"
Nov 28 12:17:07 crc kubenswrapper[4772]: I1128 12:17:07.740218 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-42knr_42c5efb9-80ec-409e-9e49-326461bfa739/extract-utilities/0.log"
Nov 28 12:17:07 crc kubenswrapper[4772]: I1128 12:17:07.845380 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-42knr_42c5efb9-80ec-409e-9e49-326461bfa739/extract-content/0.log"
Nov 28 12:17:07 crc kubenswrapper[4772]: I1128 12:17:07.864688 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-42knr_42c5efb9-80ec-409e-9e49-326461bfa739/extract-content/0.log"
Nov 28 12:17:07 crc kubenswrapper[4772]: I1128 12:17:07.993804 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9h6kq_08428ad5-f854-4a72-a10a-bc72715b05a0/registry-server/0.log"
Nov 28 12:17:08 crc kubenswrapper[4772]: I1128 12:17:08.037623 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-42knr_42c5efb9-80ec-409e-9e49-326461bfa739/extract-utilities/0.log"
Nov 28 12:17:08 crc kubenswrapper[4772]: I1128 12:17:08.047525 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-42knr_42c5efb9-80ec-409e-9e49-326461bfa739/extract-content/0.log"
Nov 28 12:17:08 crc kubenswrapper[4772]: I1128 12:17:08.268686 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4slqp_c6c49597-5e3b-44ab-9b76-cb54e6c65736/marketplace-operator/0.log"
Nov 28 12:17:08 crc kubenswrapper[4772]: I1128 12:17:08.472194 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zw5p4_37640314-124c-448f-bb98-bc81f7b7ab0f/extract-utilities/0.log"
Nov 28 12:17:08 crc kubenswrapper[4772]: I1128 12:17:08.614412 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-42knr_42c5efb9-80ec-409e-9e49-326461bfa739/registry-server/0.log"
Nov 28 12:17:08 crc kubenswrapper[4772]: I1128 12:17:08.701000 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zw5p4_37640314-124c-448f-bb98-bc81f7b7ab0f/extract-content/0.log"
Nov 28 12:17:08 crc kubenswrapper[4772]: I1128 12:17:08.763520 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zw5p4_37640314-124c-448f-bb98-bc81f7b7ab0f/extract-content/0.log"
Nov 28 12:17:08 crc kubenswrapper[4772]: I1128 12:17:08.776294 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zw5p4_37640314-124c-448f-bb98-bc81f7b7ab0f/extract-utilities/0.log"
Nov 28 12:17:08 crc kubenswrapper[4772]: I1128 12:17:08.906681 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zw5p4_37640314-124c-448f-bb98-bc81f7b7ab0f/extract-content/0.log"
Nov 28 12:17:08 crc kubenswrapper[4772]: I1128 12:17:08.945554 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zw5p4_37640314-124c-448f-bb98-bc81f7b7ab0f/extract-utilities/0.log"
Nov 28 12:17:09 crc kubenswrapper[4772]: I1128 12:17:09.097984 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ch7sp_89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932/extract-utilities/0.log"
Nov 28 12:17:09 crc kubenswrapper[4772]: I1128 12:17:09.131177 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zw5p4_37640314-124c-448f-bb98-bc81f7b7ab0f/registry-server/0.log"
Nov 28 12:17:09 crc kubenswrapper[4772]: I1128 12:17:09.301836 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ch7sp_89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932/extract-content/0.log"
Nov 28 12:17:09 crc kubenswrapper[4772]: I1128 12:17:09.349960 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ch7sp_89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932/extract-utilities/0.log"
Nov 28 12:17:09 crc kubenswrapper[4772]: I1128 12:17:09.367032 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ch7sp_89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932/extract-content/0.log"
Nov 28 12:17:09 crc kubenswrapper[4772]: I1128 12:17:09.495831 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ch7sp_89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932/extract-utilities/0.log"
Nov 28 12:17:09 crc kubenswrapper[4772]: I1128 12:17:09.498437 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ch7sp_89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932/extract-content/0.log"
Nov 28 12:17:10 crc kubenswrapper[4772]: I1128 12:17:10.071302 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ch7sp_89b22f99-c5ec-4cfb-9ebc-c2f81c9dc932/registry-server/0.log"
Nov 28 12:18:23 crc kubenswrapper[4772]: I1128 12:18:23.897064 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 12:18:23 crc kubenswrapper[4772]: I1128 12:18:23.898046 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 12:18:50 crc kubenswrapper[4772]: I1128 12:18:50.234593 4772 generic.go:334] "Generic (PLEG): container finished" podID="9465913e-87c9-4d2e-bb14-b6571a93f5ef" containerID="83c8575c8ad55c09f7ab66f115a6f2ee2fe05bbe67c848365e7f78690f62c989" exitCode=0
Nov 28 12:18:50 crc kubenswrapper[4772]: I1128 12:18:50.234660 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chccj/must-gather-qh9h6" event={"ID":"9465913e-87c9-4d2e-bb14-b6571a93f5ef","Type":"ContainerDied","Data":"83c8575c8ad55c09f7ab66f115a6f2ee2fe05bbe67c848365e7f78690f62c989"}
Nov 28 12:18:50 crc kubenswrapper[4772]: I1128 12:18:50.235926 4772 scope.go:117] "RemoveContainer" containerID="83c8575c8ad55c09f7ab66f115a6f2ee2fe05bbe67c848365e7f78690f62c989"
Nov 28 12:18:50 crc kubenswrapper[4772]: I1128 12:18:50.621988 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-chccj_must-gather-qh9h6_9465913e-87c9-4d2e-bb14-b6571a93f5ef/gather/0.log"
Nov 28 12:18:53 crc kubenswrapper[4772]: I1128 12:18:53.896694 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 12:18:53 crc kubenswrapper[4772]: I1128 12:18:53.897511 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 12:19:01 crc kubenswrapper[4772]: I1128 12:19:01.405772 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-chccj/must-gather-qh9h6"]
Nov 28 12:19:01 crc kubenswrapper[4772]: I1128 12:19:01.406651 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-chccj/must-gather-qh9h6" podUID="9465913e-87c9-4d2e-bb14-b6571a93f5ef" containerName="copy" containerID="cri-o://bf4c4817959a672e621e7724d87af0257da62d3b557b2bb4d6e639be0de6d291" gracePeriod=2
Nov 28 12:19:01 crc kubenswrapper[4772]: I1128 12:19:01.417240 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-chccj/must-gather-qh9h6"]
Nov 28 12:19:01 crc kubenswrapper[4772]: I1128 12:19:01.896423 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-chccj_must-gather-qh9h6_9465913e-87c9-4d2e-bb14-b6571a93f5ef/copy/0.log"
Nov 28 12:19:01 crc kubenswrapper[4772]: I1128 12:19:01.897098 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-chccj/must-gather-qh9h6"
Nov 28 12:19:01 crc kubenswrapper[4772]: I1128 12:19:01.993720 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9465913e-87c9-4d2e-bb14-b6571a93f5ef-must-gather-output\") pod \"9465913e-87c9-4d2e-bb14-b6571a93f5ef\" (UID: \"9465913e-87c9-4d2e-bb14-b6571a93f5ef\") "
Nov 28 12:19:01 crc kubenswrapper[4772]: I1128 12:19:01.993881 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnf6n\" (UniqueName: \"kubernetes.io/projected/9465913e-87c9-4d2e-bb14-b6571a93f5ef-kube-api-access-pnf6n\") pod \"9465913e-87c9-4d2e-bb14-b6571a93f5ef\" (UID: \"9465913e-87c9-4d2e-bb14-b6571a93f5ef\") "
Nov 28 12:19:02 crc kubenswrapper[4772]: I1128 12:19:02.003791 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9465913e-87c9-4d2e-bb14-b6571a93f5ef-kube-api-access-pnf6n" (OuterVolumeSpecName: "kube-api-access-pnf6n") pod "9465913e-87c9-4d2e-bb14-b6571a93f5ef" (UID: "9465913e-87c9-4d2e-bb14-b6571a93f5ef"). InnerVolumeSpecName "kube-api-access-pnf6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:19:02 crc kubenswrapper[4772]: I1128 12:19:02.097117 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnf6n\" (UniqueName: \"kubernetes.io/projected/9465913e-87c9-4d2e-bb14-b6571a93f5ef-kube-api-access-pnf6n\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:02 crc kubenswrapper[4772]: I1128 12:19:02.144238 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9465913e-87c9-4d2e-bb14-b6571a93f5ef-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9465913e-87c9-4d2e-bb14-b6571a93f5ef" (UID: "9465913e-87c9-4d2e-bb14-b6571a93f5ef"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:19:02 crc kubenswrapper[4772]: I1128 12:19:02.199402 4772 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9465913e-87c9-4d2e-bb14-b6571a93f5ef-must-gather-output\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:02 crc kubenswrapper[4772]: I1128 12:19:02.373834 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-chccj_must-gather-qh9h6_9465913e-87c9-4d2e-bb14-b6571a93f5ef/copy/0.log"
Nov 28 12:19:02 crc kubenswrapper[4772]: I1128 12:19:02.374683 4772 generic.go:334] "Generic (PLEG): container finished" podID="9465913e-87c9-4d2e-bb14-b6571a93f5ef" containerID="bf4c4817959a672e621e7724d87af0257da62d3b557b2bb4d6e639be0de6d291" exitCode=143
Nov 28 12:19:02 crc kubenswrapper[4772]: I1128 12:19:02.374742 4772 scope.go:117] "RemoveContainer" containerID="bf4c4817959a672e621e7724d87af0257da62d3b557b2bb4d6e639be0de6d291"
Nov 28 12:19:02 crc kubenswrapper[4772]: I1128 12:19:02.374759 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-chccj/must-gather-qh9h6"
Nov 28 12:19:02 crc kubenswrapper[4772]: I1128 12:19:02.418922 4772 scope.go:117] "RemoveContainer" containerID="83c8575c8ad55c09f7ab66f115a6f2ee2fe05bbe67c848365e7f78690f62c989"
Nov 28 12:19:02 crc kubenswrapper[4772]: I1128 12:19:02.504012 4772 scope.go:117] "RemoveContainer" containerID="bf4c4817959a672e621e7724d87af0257da62d3b557b2bb4d6e639be0de6d291"
Nov 28 12:19:02 crc kubenswrapper[4772]: E1128 12:19:02.504486 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf4c4817959a672e621e7724d87af0257da62d3b557b2bb4d6e639be0de6d291\": container with ID starting with bf4c4817959a672e621e7724d87af0257da62d3b557b2bb4d6e639be0de6d291 not found: ID does not exist" containerID="bf4c4817959a672e621e7724d87af0257da62d3b557b2bb4d6e639be0de6d291"
Nov 28 12:19:02 crc kubenswrapper[4772]: I1128 12:19:02.504545 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf4c4817959a672e621e7724d87af0257da62d3b557b2bb4d6e639be0de6d291"} err="failed to get container status \"bf4c4817959a672e621e7724d87af0257da62d3b557b2bb4d6e639be0de6d291\": rpc error: code = NotFound desc = could not find container \"bf4c4817959a672e621e7724d87af0257da62d3b557b2bb4d6e639be0de6d291\": container with ID starting with bf4c4817959a672e621e7724d87af0257da62d3b557b2bb4d6e639be0de6d291 not found: ID does not exist"
Nov 28 12:19:02 crc kubenswrapper[4772]: I1128 12:19:02.504577 4772 scope.go:117] "RemoveContainer" containerID="83c8575c8ad55c09f7ab66f115a6f2ee2fe05bbe67c848365e7f78690f62c989"
Nov 28 12:19:02 crc kubenswrapper[4772]: E1128 12:19:02.504948 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83c8575c8ad55c09f7ab66f115a6f2ee2fe05bbe67c848365e7f78690f62c989\": container with ID starting with 83c8575c8ad55c09f7ab66f115a6f2ee2fe05bbe67c848365e7f78690f62c989 not found: ID does not exist" containerID="83c8575c8ad55c09f7ab66f115a6f2ee2fe05bbe67c848365e7f78690f62c989"
Nov 28 12:19:02 crc kubenswrapper[4772]: I1128 12:19:02.505001 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c8575c8ad55c09f7ab66f115a6f2ee2fe05bbe67c848365e7f78690f62c989"} err="failed to get container status \"83c8575c8ad55c09f7ab66f115a6f2ee2fe05bbe67c848365e7f78690f62c989\": rpc error: code = NotFound desc = could not find container \"83c8575c8ad55c09f7ab66f115a6f2ee2fe05bbe67c848365e7f78690f62c989\": container with ID starting with 83c8575c8ad55c09f7ab66f115a6f2ee2fe05bbe67c848365e7f78690f62c989 not found: ID does not exist"
Nov 28 12:19:04 crc kubenswrapper[4772]: I1128 12:19:04.013678 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9465913e-87c9-4d2e-bb14-b6571a93f5ef" path="/var/lib/kubelet/pods/9465913e-87c9-4d2e-bb14-b6571a93f5ef/volumes"
Nov 28 12:19:23 crc kubenswrapper[4772]: I1128 12:19:23.896213 4772 patch_prober.go:28] interesting pod/machine-config-daemon-zfsjk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 12:19:23 crc kubenswrapper[4772]: I1128 12:19:23.896932 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 12:19:23 crc kubenswrapper[4772]: I1128 12:19:23.896993 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk"
Nov 28 12:19:23 crc kubenswrapper[4772]: I1128 12:19:23.897939 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16db3c6e51ec9fa8800b07089a7130627bed271fe59903f0a6b34025d207e7ff"} pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 12:19:23 crc kubenswrapper[4772]: I1128 12:19:23.898036 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" podUID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerName="machine-config-daemon" containerID="cri-o://16db3c6e51ec9fa8800b07089a7130627bed271fe59903f0a6b34025d207e7ff" gracePeriod=600
Nov 28 12:19:24 crc kubenswrapper[4772]: I1128 12:19:24.635968 4772 generic.go:334] "Generic (PLEG): container finished" podID="8e4e32c1-8c60-4972-ae38-a20020b374fe" containerID="16db3c6e51ec9fa8800b07089a7130627bed271fe59903f0a6b34025d207e7ff" exitCode=0
Nov 28 12:19:24 crc kubenswrapper[4772]: I1128 12:19:24.636021 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerDied","Data":"16db3c6e51ec9fa8800b07089a7130627bed271fe59903f0a6b34025d207e7ff"}
Nov 28 12:19:24 crc kubenswrapper[4772]: I1128 12:19:24.636264 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfsjk" event={"ID":"8e4e32c1-8c60-4972-ae38-a20020b374fe","Type":"ContainerStarted","Data":"2141cb181af5607296cdb60620f846d8f7238dbdaac75a36c470af68017c2b0b"}
Nov 28 12:19:24 crc kubenswrapper[4772]: I1128 12:19:24.636286 4772 scope.go:117] "RemoveContainer" containerID="513b8b18c11065eb0392cc1eb72d34da8eeb8f940770a54663254c71f335e4b1"
Nov 28 12:19:48 crc kubenswrapper[4772]: I1128 12:19:48.462947 4772 scope.go:117] "RemoveContainer" containerID="aec3f00e0932a36a12b11011c73a78ca3852077892214a14105d8db593b52843"